00:00:00.001 Started by upstream project "autotest-per-patch" build number 121263 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.112 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.113 The recommended git tool is: git 00:00:00.113 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.215 Using shallow fetch with depth 1 00:00:00.215 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.215 > git --version # timeout=10 00:00:00.244 > git --version # 'git version 2.39.2' 00:00:00.244 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.972 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.986 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.998 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:07.998 > git config core.sparsecheckout # timeout=10 00:00:08.009 > git read-tree -mu HEAD # timeout=10 00:00:08.025 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:08.047 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:08.047 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:08.159 [Pipeline] Start of Pipeline 00:00:08.173 [Pipeline] library 00:00:08.174 Loading library shm_lib@master 00:00:08.174 Library shm_lib@master is cached. Copying from home. 00:00:08.191 [Pipeline] node 00:00:23.195 Still waiting to schedule task 00:00:23.195 Waiting for next available executor on ‘vagrant-vm-host’ 00:11:36.853 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:11:36.855 [Pipeline] { 00:11:36.871 [Pipeline] catchError 00:11:36.872 [Pipeline] { 00:11:36.885 [Pipeline] wrap 00:11:36.892 [Pipeline] { 00:11:36.900 [Pipeline] stage 00:11:36.901 [Pipeline] { (Prologue) 00:11:36.917 [Pipeline] echo 00:11:36.918 Node: VM-host-SM16 00:11:36.923 [Pipeline] cleanWs 00:11:36.932 [WS-CLEANUP] Deleting project workspace... 00:11:36.932 [WS-CLEANUP] Deferred wipeout is used... 
00:11:36.937 [WS-CLEANUP] done 00:11:37.109 [Pipeline] setCustomBuildProperty 00:11:37.188 [Pipeline] nodesByLabel 00:11:37.189 Found a total of 1 nodes with the 'sorcerer' label 00:11:37.199 [Pipeline] httpRequest 00:11:37.203 HttpMethod: GET 00:11:37.204 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:11:37.206 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:11:37.209 Response Code: HTTP/1.1 200 OK 00:11:37.209 Success: Status code 200 is in the accepted range: 200,404 00:11:37.209 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:11:37.347 [Pipeline] sh 00:11:37.627 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:11:37.648 [Pipeline] httpRequest 00:11:37.653 HttpMethod: GET 00:11:37.653 URL: http://10.211.164.96/packages/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz 00:11:37.654 Sending request to url: http://10.211.164.96/packages/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz 00:11:37.654 Response Code: HTTP/1.1 200 OK 00:11:37.655 Success: Status code 200 is in the accepted range: 200,404 00:11:37.655 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz 00:11:41.991 [Pipeline] sh 00:11:42.270 + tar --no-same-owner -xf spdk_f93182c78e3c077975126b50452fed761f9587e0.tar.gz 00:11:45.604 [Pipeline] sh 00:11:45.877 + git -C spdk log --oneline -n5 00:11:45.878 f93182c78 accel: remove flags 00:11:45.878 bebe61b53 util: remove spdk_iov_one() 00:11:45.878 975bb24ba nvmf: remove spdk_nvmf_subsytem_any_listener_allowed() 00:11:45.878 f8d98be2d nvmf: remove cb_fn/cb_arg from spdk_nvmf_qpair_disconnect() 00:11:45.878 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:11:45.893 [Pipeline] writeFile 00:11:45.905 [Pipeline] sh 00:11:46.185 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:11:46.196 [Pipeline] sh 00:11:46.474 + cat autorun-spdk.conf 00:11:46.474 SPDK_RUN_FUNCTIONAL_TEST=1 00:11:46.474 SPDK_TEST_NVMF=1 00:11:46.474 SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:46.474 SPDK_TEST_USDT=1 00:11:46.474 SPDK_TEST_NVMF_MDNS=1 00:11:46.474 SPDK_RUN_UBSAN=1 00:11:46.474 NET_TYPE=virt 00:11:46.474 SPDK_JSONRPC_GO_CLIENT=1 00:11:46.474 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:46.480 RUN_NIGHTLY=0 00:11:46.482 [Pipeline] } 00:11:46.497 [Pipeline] // stage 00:11:46.510 [Pipeline] stage 00:11:46.511 [Pipeline] { (Run VM) 00:11:46.522 [Pipeline] sh 00:11:46.798 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:11:46.798 + echo 'Start stage prepare_nvme.sh' 00:11:46.798 Start stage prepare_nvme.sh 00:11:46.798 + [[ -n 1 ]] 00:11:46.798 + disk_prefix=ex1 00:11:46.798 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:11:46.798 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:11:46.798 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:11:46.798 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:11:46.798 ++ SPDK_TEST_NVMF=1 00:11:46.798 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:11:46.798 ++ SPDK_TEST_USDT=1 00:11:46.798 ++ SPDK_TEST_NVMF_MDNS=1 00:11:46.798 ++ SPDK_RUN_UBSAN=1 00:11:46.798 ++ NET_TYPE=virt 00:11:46.798 ++ SPDK_JSONRPC_GO_CLIENT=1 00:11:46.798 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:11:46.798 ++ RUN_NIGHTLY=0 00:11:46.798 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:11:46.798 + nvme_files=() 00:11:46.798 + declare -A 
nvme_files 00:11:46.798 + backend_dir=/var/lib/libvirt/images/backends 00:11:46.798 + nvme_files['nvme.img']=5G 00:11:46.798 + nvme_files['nvme-cmb.img']=5G 00:11:46.798 + nvme_files['nvme-multi0.img']=4G 00:11:46.798 + nvme_files['nvme-multi1.img']=4G 00:11:46.798 + nvme_files['nvme-multi2.img']=4G 00:11:46.798 + nvme_files['nvme-openstack.img']=8G 00:11:46.799 + nvme_files['nvme-zns.img']=5G 00:11:46.799 + (( SPDK_TEST_NVME_PMR == 1 )) 00:11:46.799 + (( SPDK_TEST_FTL == 1 )) 00:11:46.799 + (( SPDK_TEST_NVME_FDP == 1 )) 00:11:46.799 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:11:46.799 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:11:46.799 + for nvme in "${!nvme_files[@]}" 00:11:46.799 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:11:47.365 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:11:47.365 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:11:47.365 + echo 'End stage prepare_nvme.sh' 00:11:47.365 End stage prepare_nvme.sh 00:11:47.377 [Pipeline] sh 00:11:47.710 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:11:47.710 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:11:47.710 00:11:47.710 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:11:47.710 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:11:47.710 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:11:47.710 HELP=0 00:11:47.710 DRY_RUN=0 00:11:47.710 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:11:47.710 NVME_DISKS_TYPE=nvme,nvme, 00:11:47.710 NVME_AUTO_CREATE=0 00:11:47.710 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:11:47.710 NVME_CMB=,, 00:11:47.710 NVME_PMR=,, 00:11:47.710 NVME_ZNS=,, 00:11:47.710 NVME_MS=,, 00:11:47.710 NVME_FDP=,, 00:11:47.710 SPDK_VAGRANT_DISTRO=fedora38 00:11:47.710 SPDK_VAGRANT_VMCPU=10 00:11:47.710 SPDK_VAGRANT_VMRAM=12288 00:11:47.710 SPDK_VAGRANT_PROVIDER=libvirt 00:11:47.710 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:11:47.710 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:11:47.710 SPDK_OPENSTACK_NETWORK=0 00:11:47.710 VAGRANT_PACKAGE_BOX=0 00:11:47.710 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:11:47.710 FORCE_DISTRO=true 00:11:47.710 VAGRANT_BOX_VERSION= 00:11:47.710 EXTRA_VAGRANTFILES= 00:11:47.710 NIC_MODEL=e1000 00:11:47.710 00:11:47.710 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:11:47.710 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:11:50.995 Bringing machine 'default' up with 'libvirt' provider... 00:11:51.253 ==> default: Creating image (snapshot of base box volume). 00:11:51.512 ==> default: Creating domain with the following settings... 00:11:51.512 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714137728_c0cda861da7bb0b0d5c3 00:11:51.512 ==> default: -- Domain type: kvm 00:11:51.512 ==> default: -- Cpus: 10 00:11:51.512 ==> default: -- Feature: acpi 00:11:51.512 ==> default: -- Feature: apic 00:11:51.512 ==> default: -- Feature: pae 00:11:51.512 ==> default: -- Memory: 12288M 00:11:51.512 ==> default: -- Memory Backing: hugepages: 00:11:51.512 ==> default: -- Management MAC: 00:11:51.512 ==> default: -- Loader: 00:11:51.512 ==> default: -- Nvram: 00:11:51.512 ==> default: -- Base box: spdk/fedora38 00:11:51.512 ==> default: -- Storage pool: default 00:11:51.512 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714137728_c0cda861da7bb0b0d5c3.img (20G) 00:11:51.512 ==> default: -- Volume Cache: default 00:11:51.512 ==> default: -- Kernel: 00:11:51.512 ==> default: -- Initrd: 00:11:51.512 ==> default: -- Graphics Type: vnc 00:11:51.512 ==> default: -- Graphics Port: -1 00:11:51.512 ==> default: -- Graphics IP: 127.0.0.1 00:11:51.512 ==> default: -- Graphics Password: Not defined 00:11:51.512 ==> default: -- Video Type: cirrus 00:11:51.512 ==> default: -- Video VRAM: 9216 00:11:51.512 ==> default: -- Sound Type: 00:11:51.512 ==> default: -- Keymap: en-us 00:11:51.512 ==> default: -- TPM Path: 00:11:51.512 ==> default: -- INPUT: type=mouse, bus=ps2 00:11:51.512 ==> default: -- Command line args: 00:11:51.512 ==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:11:51.512 ==> default: -> value=-drive, 00:11:51.512 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:11:51.512 ==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:51.512 
==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:11:51.512 ==> default: -> value=-drive, 00:11:51.512 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:11:51.512 ==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:51.512 ==> default: -> value=-drive, 00:11:51.512 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:11:51.512 ==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:51.512 ==> default: -> value=-drive, 00:11:51.512 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:11:51.512 ==> default: -> value=-device, 00:11:51.512 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:11:51.771 ==> default: Creating shared folders metadata... 00:11:51.771 ==> default: Starting domain. 00:11:53.686 ==> default: Waiting for domain to get an IP address... 00:12:15.652 ==> default: Waiting for SSH to become available... 00:12:15.652 ==> default: Configuring and enabling network interfaces... 00:12:17.553 default: SSH address: 192.168.121.110:22 00:12:17.553 default: SSH username: vagrant 00:12:17.553 default: SSH auth method: private key 00:12:20.082 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:12:28.189 ==> default: Mounting SSHFS shared folder... 00:12:28.756 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:12:28.756 ==> default: Checking Mount.. 00:12:29.691 ==> default: Folder Successfully Mounted! 00:12:29.691 ==> default: Running provisioner: file... 00:12:30.626 default: ~/.gitconfig => .gitconfig 00:12:30.884 00:12:30.884 SUCCESS! 00:12:30.884 00:12:30.884 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:12:30.884 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:12:30.884 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
00:12:30.884 00:12:31.153 [Pipeline] } 00:12:31.172 [Pipeline] // stage 00:12:31.181 [Pipeline] dir 00:12:31.181 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:12:31.182 [Pipeline] { 00:12:31.196 [Pipeline] catchError 00:12:31.197 [Pipeline] { 00:12:31.211 [Pipeline] sh 00:12:31.489 + vagrant ssh-config --host vagrant 00:12:31.490 + sed -ne /^Host/,$p 00:12:31.490 + tee ssh_conf 00:12:35.671 Host vagrant 00:12:35.671 HostName 192.168.121.110 00:12:35.671 User vagrant 00:12:35.671 Port 22 00:12:35.671 UserKnownHostsFile /dev/null 00:12:35.671 StrictHostKeyChecking no 00:12:35.671 PasswordAuthentication no 00:12:35.671 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:12:35.671 IdentitiesOnly yes 00:12:35.671 LogLevel FATAL 00:12:35.671 ForwardAgent yes 00:12:35.671 ForwardX11 yes 00:12:35.671 00:12:35.685 [Pipeline] withEnv 00:12:35.688 [Pipeline] { 00:12:35.705 [Pipeline] sh 00:12:35.982 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:12:35.982 source /etc/os-release 00:12:35.982 [[ -e /image.version ]] && img=$(< /image.version) 00:12:35.982 # Minimal, systemd-like check. 00:12:35.982 if [[ -e /.dockerenv ]]; then 00:12:35.982 # Clear garbage from the node's name: 00:12:35.982 # agt-er_autotest_547-896 -> autotest_547-896 00:12:35.982 # $HOSTNAME is the actual container id 00:12:35.982 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:12:35.982 if mountpoint -q /etc/hostname; then 00:12:35.982 # We can assume this is a mount from a host where container is running, 00:12:35.982 # so fetch its hostname to easily identify the target swarm worker. 00:12:35.982 container="$(< /etc/hostname) ($agent)" 00:12:35.982 else 00:12:35.982 # Fallback 00:12:35.982 container=$agent 00:12:35.982 fi 00:12:35.982 fi 00:12:35.982 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:12:35.982 00:12:36.250 [Pipeline] } 00:12:36.272 [Pipeline] // withEnv 00:12:36.283 [Pipeline] setCustomBuildProperty 00:12:36.298 [Pipeline] stage 00:12:36.301 [Pipeline] { (Tests) 00:12:36.320 [Pipeline] sh 00:12:36.601 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:12:36.874 [Pipeline] timeout 00:12:36.874 Timeout set to expire in 40 min 00:12:36.876 [Pipeline] { 00:12:36.888 [Pipeline] sh 00:12:37.160 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:12:37.725 HEAD is now at f93182c78 accel: remove flags 00:12:37.739 [Pipeline] sh 00:12:38.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:12:38.288 [Pipeline] sh 00:12:38.616 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:12:38.889 [Pipeline] sh 00:12:39.169 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:12:39.169 ++ readlink -f spdk_repo 00:12:39.169 + DIR_ROOT=/home/vagrant/spdk_repo 00:12:39.169 + [[ -n /home/vagrant/spdk_repo ]] 00:12:39.169 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:12:39.169 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:12:39.169 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:12:39.169 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:12:39.169 + [[ -d /home/vagrant/spdk_repo/output ]] 00:12:39.169 + cd /home/vagrant/spdk_repo 00:12:39.169 + source /etc/os-release 00:12:39.169 ++ NAME='Fedora Linux' 00:12:39.169 ++ VERSION='38 (Cloud Edition)' 00:12:39.169 ++ ID=fedora 00:12:39.169 ++ VERSION_ID=38 00:12:39.169 ++ VERSION_CODENAME= 00:12:39.169 ++ PLATFORM_ID=platform:f38 00:12:39.169 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:12:39.169 ++ ANSI_COLOR='0;38;2;60;110;180' 00:12:39.169 ++ LOGO=fedora-logo-icon 00:12:39.169 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:12:39.169 ++ HOME_URL=https://fedoraproject.org/ 00:12:39.169 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:12:39.169 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:12:39.169 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:12:39.169 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:12:39.169 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:12:39.169 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:12:39.169 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:12:39.169 ++ SUPPORT_END=2024-05-14 00:12:39.169 ++ VARIANT='Cloud Edition' 00:12:39.169 ++ VARIANT_ID=cloud 00:12:39.169 + uname -a 00:12:39.427 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:12:39.428 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:39.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:39.686 Hugepages 00:12:39.686 node hugesize free / total 00:12:39.686 node0 1048576kB 0 / 0 00:12:39.686 node0 2048kB 0 / 0 00:12:39.686 00:12:39.686 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:39.944 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:39.944 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:12:39.944 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:12:39.944 + rm -f /tmp/spdk-ld-path 00:12:39.944 + source autorun-spdk.conf 00:12:39.944 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:12:39.944 ++ SPDK_TEST_NVMF=1 00:12:39.944 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:39.944 ++ SPDK_TEST_USDT=1 00:12:39.944 ++ SPDK_TEST_NVMF_MDNS=1 00:12:39.944 ++ SPDK_RUN_UBSAN=1 00:12:39.944 ++ NET_TYPE=virt 00:12:39.944 ++ SPDK_JSONRPC_GO_CLIENT=1 00:12:39.944 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:39.944 ++ RUN_NIGHTLY=0 00:12:39.944 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:12:39.944 + [[ -n '' ]] 00:12:39.944 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:12:39.944 + for M in /var/spdk/build-*-manifest.txt 00:12:39.944 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:12:39.944 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:39.944 + for M in /var/spdk/build-*-manifest.txt 00:12:39.944 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:12:39.944 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:12:39.944 ++ uname 00:12:39.944 + [[ Linux == \L\i\n\u\x ]] 00:12:39.944 + sudo dmesg -T 00:12:39.944 + sudo dmesg --clear 00:12:39.944 + dmesg_pid=5249 00:12:39.944 + sudo dmesg -Tw 00:12:39.944 + [[ Fedora Linux == FreeBSD ]] 00:12:39.944 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:39.944 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:39.944 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:12:39.944 + [[ -x /usr/src/fio-static/fio ]] 00:12:39.944 + export FIO_BIN=/usr/src/fio-static/fio 00:12:39.944 + 
FIO_BIN=/usr/src/fio-static/fio 00:12:39.944 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:12:39.944 + [[ ! -v VFIO_QEMU_BIN ]] 00:12:39.944 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:12:39.944 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:39.944 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:39.944 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:12:39.944 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:39.944 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:39.944 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:12:39.944 Test configuration: 00:12:39.944 SPDK_RUN_FUNCTIONAL_TEST=1 00:12:39.944 SPDK_TEST_NVMF=1 00:12:39.944 SPDK_TEST_NVMF_TRANSPORT=tcp 00:12:39.944 SPDK_TEST_USDT=1 00:12:39.944 SPDK_TEST_NVMF_MDNS=1 00:12:39.944 SPDK_RUN_UBSAN=1 00:12:39.944 NET_TYPE=virt 00:12:39.945 SPDK_JSONRPC_GO_CLIENT=1 00:12:39.945 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:12:40.204 RUN_NIGHTLY=0 13:22:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.204 13:22:57 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:12:40.204 13:22:57 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.204 13:22:57 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.204 13:22:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.204 13:22:57 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.204 13:22:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.204 13:22:57 -- paths/export.sh@5 -- $ export PATH 00:12:40.204 13:22:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.204 13:22:57 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:12:40.204 13:22:57 -- common/autobuild_common.sh@435 -- $ date +%s 00:12:40.204 13:22:57 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714137777.XXXXXX 00:12:40.204 13:22:57 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714137777.sBHkNY 00:12:40.204 13:22:57 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:12:40.204 13:22:57 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:12:40.204 13:22:57 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:12:40.204 13:22:57 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:12:40.204 13:22:57 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:12:40.204 13:22:57 -- common/autobuild_common.sh@451 -- $ get_config_params 00:12:40.204 13:22:57 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:12:40.204 13:22:57 -- common/autotest_common.sh@10 -- $ set +x 00:12:40.204 13:22:57 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:12:40.204 13:22:57 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:12:40.204 13:22:57 -- pm/common@17 -- $ local monitor 00:12:40.204 13:22:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.204 13:22:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5283 00:12:40.204 13:22:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:12:40.204 13:22:57 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5285 00:12:40.204 13:22:57 -- pm/common@21 -- $ date +%s 00:12:40.204 13:22:57 -- pm/common@26 -- $ sleep 1 00:12:40.204 13:22:57 -- pm/common@21 -- $ date +%s 00:12:40.204 13:22:57 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714137777 00:12:40.204 13:22:57 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714137777 00:12:40.204 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714137777_collect-vmstat.pm.log 00:12:40.204 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714137777_collect-cpu-load.pm.log 00:12:41.173 13:22:58 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:12:41.173 13:22:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:12:41.173 13:22:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:12:41.173 13:22:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:12:41.173 13:22:58 -- spdk/autobuild.sh@16 -- $ date -u 00:12:41.173 Fri Apr 26 01:22:58 PM UTC 2024 00:12:41.173 13:22:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:12:41.173 v24.05-pre-450-gf93182c78 00:12:41.173 13:22:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:12:41.173 13:22:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:12:41.173 13:22:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:12:41.173 13:22:58 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:12:41.173 13:22:58 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:12:41.173 13:22:58 -- common/autotest_common.sh@10 -- $ set +x 00:12:41.173 ************************************ 00:12:41.173 START TEST ubsan 00:12:41.173 ************************************ 00:12:41.173 using 
ubsan 00:12:41.173 13:22:58 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:12:41.173 00:12:41.173 real 0m0.000s 00:12:41.173 user 0m0.000s 00:12:41.173 sys 0m0.000s 00:12:41.173 13:22:58 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:12:41.173 13:22:58 -- common/autotest_common.sh@10 -- $ set +x 00:12:41.173 ************************************ 00:12:41.173 END TEST ubsan 00:12:41.173 ************************************ 00:12:41.432 13:22:58 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:12:41.432 13:22:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:12:41.432 13:22:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:12:41.432 13:22:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:12:41.432 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:41.432 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:41.998 Using 'verbs' RDMA provider 00:12:57.515 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:13:09.718 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:13:09.718 go version go1.21.1 linux/amd64 00:13:09.718 Creating mk/config.mk...done. 00:13:09.718 Creating mk/cc.flags.mk...done. 00:13:09.718 Type 'make' to build. 00:13:09.718 13:23:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:13:09.718 13:23:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:13:09.718 13:23:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:13:09.718 13:23:26 -- common/autotest_common.sh@10 -- $ set +x 00:13:09.718 ************************************ 00:13:09.718 START TEST make 00:13:09.718 ************************************ 00:13:09.718 13:23:26 -- common/autotest_common.sh@1111 -- $ make -j10 00:13:09.718 make[1]: Nothing to be done for 'all'. 
00:13:24.614 The Meson build system 00:13:24.614 Version: 1.3.1 00:13:24.614 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:13:24.614 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:13:24.614 Build type: native build 00:13:24.614 Program cat found: YES (/usr/bin/cat) 00:13:24.614 Project name: DPDK 00:13:24.614 Project version: 23.11.0 00:13:24.614 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:13:24.614 C linker for the host machine: cc ld.bfd 2.39-16 00:13:24.614 Host machine cpu family: x86_64 00:13:24.614 Host machine cpu: x86_64 00:13:24.614 Message: ## Building in Developer Mode ## 00:13:24.614 Program pkg-config found: YES (/usr/bin/pkg-config) 00:13:24.614 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:13:24.614 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:13:24.614 Program python3 found: YES (/usr/bin/python3) 00:13:24.614 Program cat found: YES (/usr/bin/cat) 00:13:24.614 Compiler for C supports arguments -march=native: YES 00:13:24.614 Checking for size of "void *" : 8 00:13:24.614 Checking for size of "void *" : 8 (cached) 00:13:24.614 Library m found: YES 00:13:24.614 Library numa found: YES 00:13:24.614 Has header "numaif.h" : YES 00:13:24.614 Library fdt found: NO 00:13:24.614 Library execinfo found: NO 00:13:24.614 Has header "execinfo.h" : YES 00:13:24.614 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:13:24.614 Run-time dependency libarchive found: NO (tried pkgconfig) 00:13:24.614 Run-time dependency libbsd found: NO (tried pkgconfig) 00:13:24.614 Run-time dependency jansson found: NO (tried pkgconfig) 00:13:24.614 Run-time dependency openssl found: YES 3.0.9 00:13:24.614 Run-time dependency libpcap found: YES 1.10.4 00:13:24.614 Has header "pcap.h" with dependency libpcap: YES 00:13:24.614 Compiler for C supports arguments -Wcast-qual: YES 00:13:24.614 Compiler for C supports arguments -Wdeprecated: YES 00:13:24.614 Compiler for C supports arguments -Wformat: YES 00:13:24.614 Compiler for C supports arguments -Wformat-nonliteral: NO 00:13:24.614 Compiler for C supports arguments -Wformat-security: NO 00:13:24.614 Compiler for C supports arguments -Wmissing-declarations: YES 00:13:24.614 Compiler for C supports arguments -Wmissing-prototypes: YES 00:13:24.614 Compiler for C supports arguments -Wnested-externs: YES 00:13:24.614 Compiler for C supports arguments -Wold-style-definition: YES 00:13:24.614 Compiler for C supports arguments -Wpointer-arith: YES 00:13:24.614 Compiler for C supports arguments -Wsign-compare: YES 00:13:24.614 Compiler for C supports arguments -Wstrict-prototypes: YES 00:13:24.614 Compiler for C supports arguments -Wundef: YES 00:13:24.614 Compiler for C supports arguments -Wwrite-strings: YES 00:13:24.614 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:13:24.614 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:13:24.614 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:13:24.614 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:13:24.614 Program objdump found: YES (/usr/bin/objdump) 00:13:24.614 Compiler for C supports arguments -mavx512f: YES 00:13:24.614 Checking if "AVX512 checking" compiles: YES 00:13:24.614 Fetching value of define "__SSE4_2__" : 1 00:13:24.614 Fetching value of define "__AES__" : 1 00:13:24.614 Fetching value of define "__AVX__" : 1 00:13:24.614 
Fetching value of define "__AVX2__" : 1 00:13:24.614 Fetching value of define "__AVX512BW__" : (undefined) 00:13:24.614 Fetching value of define "__AVX512CD__" : (undefined) 00:13:24.614 Fetching value of define "__AVX512DQ__" : (undefined) 00:13:24.614 Fetching value of define "__AVX512F__" : (undefined) 00:13:24.614 Fetching value of define "__AVX512VL__" : (undefined) 00:13:24.614 Fetching value of define "__PCLMUL__" : 1 00:13:24.614 Fetching value of define "__RDRND__" : 1 00:13:24.614 Fetching value of define "__RDSEED__" : 1 00:13:24.614 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:13:24.614 Fetching value of define "__znver1__" : (undefined) 00:13:24.614 Fetching value of define "__znver2__" : (undefined) 00:13:24.614 Fetching value of define "__znver3__" : (undefined) 00:13:24.614 Fetching value of define "__znver4__" : (undefined) 00:13:24.614 Compiler for C supports arguments -Wno-format-truncation: YES 00:13:24.614 Message: lib/log: Defining dependency "log" 00:13:24.614 Message: lib/kvargs: Defining dependency "kvargs" 00:13:24.614 Message: lib/telemetry: Defining dependency "telemetry" 00:13:24.614 Checking for function "getentropy" : NO 00:13:24.614 Message: lib/eal: Defining dependency "eal" 00:13:24.614 Message: lib/ring: Defining dependency "ring" 00:13:24.614 Message: lib/rcu: Defining dependency "rcu" 00:13:24.614 Message: lib/mempool: Defining dependency "mempool" 00:13:24.614 Message: lib/mbuf: Defining dependency "mbuf" 00:13:24.614 Fetching value of define "__PCLMUL__" : 1 (cached) 00:13:24.614 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:13:24.614 Compiler for C supports arguments -mpclmul: YES 00:13:24.614 Compiler for C supports arguments -maes: YES 00:13:24.614 Compiler for C supports arguments -mavx512f: YES (cached) 00:13:24.614 Compiler for C supports arguments -mavx512bw: YES 00:13:24.614 Compiler for C supports arguments -mavx512dq: YES 00:13:24.614 Compiler for C supports arguments -mavx512vl: YES 00:13:24.614 Compiler for C supports arguments -mvpclmulqdq: YES 00:13:24.614 Compiler for C supports arguments -mavx2: YES 00:13:24.614 Compiler for C supports arguments -mavx: YES 00:13:24.614 Message: lib/net: Defining dependency "net" 00:13:24.614 Message: lib/meter: Defining dependency "meter" 00:13:24.614 Message: lib/ethdev: Defining dependency "ethdev" 00:13:24.614 Message: lib/pci: Defining dependency "pci" 00:13:24.614 Message: lib/cmdline: Defining dependency "cmdline" 00:13:24.614 Message: lib/hash: Defining dependency "hash" 00:13:24.614 Message: lib/timer: Defining dependency "timer" 00:13:24.615 Message: lib/compressdev: Defining dependency "compressdev" 00:13:24.615 Message: lib/cryptodev: Defining dependency "cryptodev" 00:13:24.615 Message: lib/dmadev: Defining dependency "dmadev" 00:13:24.615 Compiler for C supports arguments -Wno-cast-qual: YES 00:13:24.615 Message: lib/power: Defining dependency "power" 00:13:24.615 Message: lib/reorder: Defining dependency "reorder" 00:13:24.615 Message: lib/security: Defining dependency "security" 00:13:24.615 Has header "linux/userfaultfd.h" : YES 00:13:24.615 Has header "linux/vduse.h" : YES 00:13:24.615 Message: lib/vhost: Defining dependency "vhost" 00:13:24.615 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:13:24.615 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:13:24.615 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:13:24.615 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:13:24.615 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:13:24.615 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:13:24.615 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:13:24.615 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:13:24.615 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:13:24.615 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:13:24.615 Program doxygen found: YES (/usr/bin/doxygen) 00:13:24.615 Configuring doxy-api-html.conf using configuration 00:13:24.615 Configuring doxy-api-man.conf using configuration 00:13:24.615 Program mandb found: YES (/usr/bin/mandb) 00:13:24.615 Program sphinx-build found: NO 00:13:24.615 Configuring rte_build_config.h using configuration 00:13:24.615 Message: 00:13:24.615 ================= 00:13:24.615 Applications Enabled 00:13:24.615 ================= 00:13:24.615 00:13:24.615 apps: 00:13:24.615 00:13:24.615 00:13:24.615 Message: 00:13:24.615 ================= 00:13:24.615 Libraries Enabled 00:13:24.615 ================= 00:13:24.615 00:13:24.615 libs: 00:13:24.615 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:13:24.615 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:13:24.615 cryptodev, dmadev, power, reorder, security, vhost, 00:13:24.615 00:13:24.615 Message: 00:13:24.615 =============== 00:13:24.615 Drivers Enabled 00:13:24.615 =============== 00:13:24.615 00:13:24.615 common: 00:13:24.615 00:13:24.615 bus: 00:13:24.615 pci, vdev, 00:13:24.615 mempool: 00:13:24.615 ring, 00:13:24.615 dma: 00:13:24.615 00:13:24.615 net: 00:13:24.615 00:13:24.615 crypto: 00:13:24.615 00:13:24.615 compress: 00:13:24.615 00:13:24.615 vdpa: 00:13:24.615 00:13:24.615 00:13:24.615 Message: 00:13:24.615 ================= 00:13:24.615 Content Skipped 00:13:24.615 ================= 00:13:24.615 00:13:24.615 apps: 00:13:24.615 dumpcap: explicitly disabled via build config 00:13:24.615 graph: explicitly disabled via build config 00:13:24.615 pdump: explicitly disabled via build config 00:13:24.615 proc-info: explicitly disabled via build config 00:13:24.615 test-acl: explicitly disabled via build config 00:13:24.615 test-bbdev: explicitly disabled via build config 00:13:24.615 test-cmdline: explicitly disabled via build config 00:13:24.615 test-compress-perf: explicitly disabled via build config 00:13:24.615 test-crypto-perf: explicitly disabled via build config 00:13:24.615 test-dma-perf: explicitly disabled via build config 00:13:24.615 test-eventdev: explicitly disabled via build config 00:13:24.615 test-fib: explicitly disabled via build config 00:13:24.615 test-flow-perf: explicitly disabled via build config 00:13:24.615 test-gpudev: explicitly disabled via build config 00:13:24.615 test-mldev: explicitly disabled via build config 00:13:24.615 test-pipeline: explicitly disabled via build config 00:13:24.615 test-pmd: explicitly disabled via build config 00:13:24.615 test-regex: explicitly disabled via build config 00:13:24.615 test-sad: explicitly disabled via build config 00:13:24.615 test-security-perf: explicitly disabled via build config 00:13:24.615 00:13:24.615 libs: 00:13:24.615 metrics: explicitly disabled via build config 00:13:24.615 acl: explicitly disabled via build config 00:13:24.615 bbdev: explicitly disabled via build config 00:13:24.615 bitratestats: explicitly disabled via build config 00:13:24.615 bpf: explicitly disabled via build config 00:13:24.615 cfgfile: explicitly 
disabled via build config 00:13:24.615 distributor: explicitly disabled via build config 00:13:24.615 efd: explicitly disabled via build config 00:13:24.615 eventdev: explicitly disabled via build config 00:13:24.615 dispatcher: explicitly disabled via build config 00:13:24.615 gpudev: explicitly disabled via build config 00:13:24.615 gro: explicitly disabled via build config 00:13:24.615 gso: explicitly disabled via build config 00:13:24.615 ip_frag: explicitly disabled via build config 00:13:24.615 jobstats: explicitly disabled via build config 00:13:24.615 latencystats: explicitly disabled via build config 00:13:24.615 lpm: explicitly disabled via build config 00:13:24.615 member: explicitly disabled via build config 00:13:24.615 pcapng: explicitly disabled via build config 00:13:24.615 rawdev: explicitly disabled via build config 00:13:24.615 regexdev: explicitly disabled via build config 00:13:24.615 mldev: explicitly disabled via build config 00:13:24.615 rib: explicitly disabled via build config 00:13:24.615 sched: explicitly disabled via build config 00:13:24.615 stack: explicitly disabled via build config 00:13:24.615 ipsec: explicitly disabled via build config 00:13:24.615 pdcp: explicitly disabled via build config 00:13:24.615 fib: explicitly disabled via build config 00:13:24.615 port: explicitly disabled via build config 00:13:24.615 pdump: explicitly disabled via build config 00:13:24.615 table: explicitly disabled via build config 00:13:24.615 pipeline: explicitly disabled via build config 00:13:24.615 graph: explicitly disabled via build config 00:13:24.615 node: explicitly disabled via build config 00:13:24.615 00:13:24.615 drivers: 00:13:24.615 common/cpt: not in enabled drivers build config 00:13:24.615 common/dpaax: not in enabled drivers build config 00:13:24.615 common/iavf: not in enabled drivers build config 00:13:24.615 common/idpf: not in enabled drivers build config 00:13:24.615 common/mvep: not in enabled drivers build config 00:13:24.615 common/octeontx: not in enabled drivers build config 00:13:24.615 bus/auxiliary: not in enabled drivers build config 00:13:24.615 bus/cdx: not in enabled drivers build config 00:13:24.615 bus/dpaa: not in enabled drivers build config 00:13:24.615 bus/fslmc: not in enabled drivers build config 00:13:24.615 bus/ifpga: not in enabled drivers build config 00:13:24.615 bus/platform: not in enabled drivers build config 00:13:24.615 bus/vmbus: not in enabled drivers build config 00:13:24.615 common/cnxk: not in enabled drivers build config 00:13:24.615 common/mlx5: not in enabled drivers build config 00:13:24.615 common/nfp: not in enabled drivers build config 00:13:24.615 common/qat: not in enabled drivers build config 00:13:24.615 common/sfc_efx: not in enabled drivers build config 00:13:24.615 mempool/bucket: not in enabled drivers build config 00:13:24.615 mempool/cnxk: not in enabled drivers build config 00:13:24.615 mempool/dpaa: not in enabled drivers build config 00:13:24.615 mempool/dpaa2: not in enabled drivers build config 00:13:24.615 mempool/octeontx: not in enabled drivers build config 00:13:24.615 mempool/stack: not in enabled drivers build config 00:13:24.615 dma/cnxk: not in enabled drivers build config 00:13:24.615 dma/dpaa: not in enabled drivers build config 00:13:24.615 dma/dpaa2: not in enabled drivers build config 00:13:24.615 dma/hisilicon: not in enabled drivers build config 00:13:24.615 dma/idxd: not in enabled drivers build config 00:13:24.615 dma/ioat: not in enabled drivers build config 00:13:24.615 
dma/skeleton: not in enabled drivers build config 00:13:24.615 net/af_packet: not in enabled drivers build config 00:13:24.615 net/af_xdp: not in enabled drivers build config 00:13:24.615 net/ark: not in enabled drivers build config 00:13:24.615 net/atlantic: not in enabled drivers build config 00:13:24.615 net/avp: not in enabled drivers build config 00:13:24.615 net/axgbe: not in enabled drivers build config 00:13:24.615 net/bnx2x: not in enabled drivers build config 00:13:24.615 net/bnxt: not in enabled drivers build config 00:13:24.615 net/bonding: not in enabled drivers build config 00:13:24.615 net/cnxk: not in enabled drivers build config 00:13:24.615 net/cpfl: not in enabled drivers build config 00:13:24.615 net/cxgbe: not in enabled drivers build config 00:13:24.615 net/dpaa: not in enabled drivers build config 00:13:24.615 net/dpaa2: not in enabled drivers build config 00:13:24.615 net/e1000: not in enabled drivers build config 00:13:24.615 net/ena: not in enabled drivers build config 00:13:24.615 net/enetc: not in enabled drivers build config 00:13:24.615 net/enetfec: not in enabled drivers build config 00:13:24.615 net/enic: not in enabled drivers build config 00:13:24.615 net/failsafe: not in enabled drivers build config 00:13:24.615 net/fm10k: not in enabled drivers build config 00:13:24.615 net/gve: not in enabled drivers build config 00:13:24.615 net/hinic: not in enabled drivers build config 00:13:24.615 net/hns3: not in enabled drivers build config 00:13:24.615 net/i40e: not in enabled drivers build config 00:13:24.615 net/iavf: not in enabled drivers build config 00:13:24.615 net/ice: not in enabled drivers build config 00:13:24.615 net/idpf: not in enabled drivers build config 00:13:24.615 net/igc: not in enabled drivers build config 00:13:24.615 net/ionic: not in enabled drivers build config 00:13:24.615 net/ipn3ke: not in enabled drivers build config 00:13:24.615 net/ixgbe: not in enabled drivers build config 00:13:24.615 net/mana: not in enabled drivers build config 00:13:24.615 net/memif: not in enabled drivers build config 00:13:24.615 net/mlx4: not in enabled drivers build config 00:13:24.615 net/mlx5: not in enabled drivers build config 00:13:24.615 net/mvneta: not in enabled drivers build config 00:13:24.615 net/mvpp2: not in enabled drivers build config 00:13:24.616 net/netvsc: not in enabled drivers build config 00:13:24.616 net/nfb: not in enabled drivers build config 00:13:24.616 net/nfp: not in enabled drivers build config 00:13:24.616 net/ngbe: not in enabled drivers build config 00:13:24.616 net/null: not in enabled drivers build config 00:13:24.616 net/octeontx: not in enabled drivers build config 00:13:24.616 net/octeon_ep: not in enabled drivers build config 00:13:24.616 net/pcap: not in enabled drivers build config 00:13:24.616 net/pfe: not in enabled drivers build config 00:13:24.616 net/qede: not in enabled drivers build config 00:13:24.616 net/ring: not in enabled drivers build config 00:13:24.616 net/sfc: not in enabled drivers build config 00:13:24.616 net/softnic: not in enabled drivers build config 00:13:24.616 net/tap: not in enabled drivers build config 00:13:24.616 net/thunderx: not in enabled drivers build config 00:13:24.616 net/txgbe: not in enabled drivers build config 00:13:24.616 net/vdev_netvsc: not in enabled drivers build config 00:13:24.616 net/vhost: not in enabled drivers build config 00:13:24.616 net/virtio: not in enabled drivers build config 00:13:24.616 net/vmxnet3: not in enabled drivers build config 00:13:24.616 raw/*: 
missing internal dependency, "rawdev" 00:13:24.616 crypto/armv8: not in enabled drivers build config 00:13:24.616 crypto/bcmfs: not in enabled drivers build config 00:13:24.616 crypto/caam_jr: not in enabled drivers build config 00:13:24.616 crypto/ccp: not in enabled drivers build config 00:13:24.616 crypto/cnxk: not in enabled drivers build config 00:13:24.616 crypto/dpaa_sec: not in enabled drivers build config 00:13:24.616 crypto/dpaa2_sec: not in enabled drivers build config 00:13:24.616 crypto/ipsec_mb: not in enabled drivers build config 00:13:24.616 crypto/mlx5: not in enabled drivers build config 00:13:24.616 crypto/mvsam: not in enabled drivers build config 00:13:24.616 crypto/nitrox: not in enabled drivers build config 00:13:24.616 crypto/null: not in enabled drivers build config 00:13:24.616 crypto/octeontx: not in enabled drivers build config 00:13:24.616 crypto/openssl: not in enabled drivers build config 00:13:24.616 crypto/scheduler: not in enabled drivers build config 00:13:24.616 crypto/uadk: not in enabled drivers build config 00:13:24.616 crypto/virtio: not in enabled drivers build config 00:13:24.616 compress/isal: not in enabled drivers build config 00:13:24.616 compress/mlx5: not in enabled drivers build config 00:13:24.616 compress/octeontx: not in enabled drivers build config 00:13:24.616 compress/zlib: not in enabled drivers build config 00:13:24.616 regex/*: missing internal dependency, "regexdev" 00:13:24.616 ml/*: missing internal dependency, "mldev" 00:13:24.616 vdpa/ifc: not in enabled drivers build config 00:13:24.616 vdpa/mlx5: not in enabled drivers build config 00:13:24.616 vdpa/nfp: not in enabled drivers build config 00:13:24.616 vdpa/sfc: not in enabled drivers build config 00:13:24.616 event/*: missing internal dependency, "eventdev" 00:13:24.616 baseband/*: missing internal dependency, "bbdev" 00:13:24.616 gpu/*: missing internal dependency, "gpudev" 00:13:24.616 00:13:24.616 00:13:24.616 Build targets in project: 85 00:13:24.616 00:13:24.616 DPDK 23.11.0 00:13:24.616 00:13:24.616 User defined options 00:13:24.616 buildtype : debug 00:13:24.616 default_library : shared 00:13:24.616 libdir : lib 00:13:24.616 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:24.616 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:13:24.616 c_link_args : 00:13:24.616 cpu_instruction_set: native 00:13:24.616 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:13:24.616 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:13:24.616 enable_docs : false 00:13:24.616 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:13:24.616 enable_kmods : false 00:13:24.616 tests : false 00:13:24.616 00:13:24.616 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:13:24.616 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:13:24.616 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:13:24.616 [2/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:13:24.616 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:13:24.616 [4/265] 
Linking static target lib/librte_kvargs.a 00:13:24.616 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:13:24.616 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:13:24.616 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:13:24.616 [8/265] Linking static target lib/librte_log.a 00:13:24.616 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:13:24.616 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:13:24.616 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.616 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:13:24.616 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:13:24.616 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:13:24.616 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:13:24.616 [16/265] Linking static target lib/librte_telemetry.a 00:13:24.616 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.616 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:13:24.616 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:13:24.616 [20/265] Linking target lib/librte_log.so.24.0 00:13:24.616 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:13:24.875 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:13:24.875 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:13:24.875 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:13:24.875 [25/265] Linking target lib/librte_kvargs.so.24.0 00:13:24.875 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:13:25.133 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:13:25.133 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:13:25.391 [29/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:13:25.391 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:13:25.391 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:13:25.391 [32/265] Linking target lib/librte_telemetry.so.24.0 00:13:25.391 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:13:25.391 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:13:25.650 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:13:25.650 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:13:25.650 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:13:25.909 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:13:25.909 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:13:25.909 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:13:25.909 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:13:26.167 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:13:26.167 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:13:26.167 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:13:26.167 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:13:26.424 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:13:26.682 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:13:26.682 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:13:26.682 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:13:26.940 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:13:26.940 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:13:27.198 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:13:27.198 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:13:27.198 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:13:27.198 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:13:27.455 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:13:27.455 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:13:27.455 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:13:27.455 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:13:27.714 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:13:27.714 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:13:27.714 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:13:27.714 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:13:27.972 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:13:27.972 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:13:28.230 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:13:28.230 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:13:28.230 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:28.489 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:28.757 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:28.757 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:28.757 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:28.757 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:28.757 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:28.757 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:13:28.757 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:28.757 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:29.325 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:29.325 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:29.325 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:29.325 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:29.325 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:29.582 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 
00:13:29.582 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:29.840 [85/265] Linking static target lib/librte_eal.a 00:13:29.840 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:29.840 [87/265] Linking static target lib/librte_ring.a 00:13:29.840 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:29.840 [89/265] Linking static target lib/librte_rcu.a 00:13:30.098 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:30.098 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:30.356 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:30.356 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:30.356 [94/265] Linking static target lib/librte_mempool.a 00:13:30.356 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:30.614 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.614 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:30.614 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.874 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:30.874 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:13:30.874 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:13:31.132 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:31.132 [103/265] Linking static target lib/librte_mbuf.a 00:13:31.393 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:31.393 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:31.652 [106/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:31.652 [107/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:31.652 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:31.652 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:31.652 [110/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:31.652 [111/265] Linking static target lib/librte_meter.a 00:13:31.910 [112/265] Linking static target lib/librte_net.a 00:13:31.910 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:32.168 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:32.168 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.168 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:32.168 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.426 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:32.426 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:32.991 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:32.991 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:32.991 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:33.249 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:33.249 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:33.249 [125/265] 
Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:33.249 [126/265] Linking static target lib/librte_pci.a 00:13:33.249 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:33.249 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:33.507 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:33.507 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:33.765 [131/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:33.765 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:33.765 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:33.765 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:33.765 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:33.765 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:33.765 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:33.765 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:34.024 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:34.024 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:34.024 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:34.024 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:34.024 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:34.024 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:34.024 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:34.282 [146/265] Linking static target lib/librte_ethdev.a 00:13:34.282 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:34.282 [148/265] Linking static target lib/librte_cmdline.a 00:13:34.540 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:34.540 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:34.798 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:34.798 [152/265] Linking static target lib/librte_timer.a 00:13:34.798 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:34.798 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:35.057 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:35.057 [156/265] Linking static target lib/librte_hash.a 00:13:35.057 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:35.057 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:35.315 [159/265] Linking static target lib/librte_compressdev.a 00:13:35.315 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:35.315 [161/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:35.315 [162/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:35.572 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:35.572 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:35.572 [165/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:35.829 [166/265] Linking static target lib/librte_dmadev.a 00:13:35.829 [167/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:35.829 [168/265] Linking static target lib/librte_cryptodev.a 00:13:36.087 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:36.087 [170/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:36.087 [171/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.087 [172/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:36.087 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.087 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.087 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:36.347 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:36.605 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:36.605 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:36.605 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:36.863 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:36.863 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:36.863 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:36.863 [183/265] Linking static target lib/librte_power.a 00:13:37.428 [184/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:13:37.428 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:37.428 [186/265] Linking static target lib/librte_security.a 00:13:37.428 [187/265] Linking static target lib/librte_reorder.a 00:13:37.428 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:37.428 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:37.685 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:37.942 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:37.942 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.200 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.200 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.200 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:38.459 [196/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:38.459 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:38.459 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:38.718 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:38.718 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:38.977 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:38.977 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:38.977 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:38.977 [204/265] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:39.236 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:39.236 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:39.524 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:39.524 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:39.524 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:39.524 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:39.524 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:39.524 [212/265] Linking static target drivers/librte_bus_pci.a 00:13:39.524 [213/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:39.524 [214/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:39.783 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:39.783 [216/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:39.783 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:39.783 [218/265] Linking static target drivers/librte_bus_vdev.a 00:13:39.783 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:39.783 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:39.783 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:39.783 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:39.783 [223/265] Linking static target drivers/librte_mempool_ring.a 00:13:39.783 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.042 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:40.977 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:40.977 [227/265] Linking static target lib/librte_vhost.a 00:13:41.544 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:41.544 [229/265] Linking target lib/librte_eal.so.24.0 00:13:41.803 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:13:41.803 [231/265] Linking target lib/librte_pci.so.24.0 00:13:41.803 [232/265] Linking target lib/librte_ring.so.24.0 00:13:41.803 [233/265] Linking target lib/librte_meter.so.24.0 00:13:41.803 [234/265] Linking target lib/librte_timer.so.24.0 00:13:41.803 [235/265] Linking target lib/librte_dmadev.so.24.0 00:13:41.803 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:13:41.803 [237/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:13:41.803 [238/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:13:41.803 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:13:41.803 [240/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:13:41.803 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:13:42.062 [242/265] Linking target lib/librte_rcu.so.24.0 00:13:42.062 [243/265] Linking target lib/librte_mempool.so.24.0 00:13:42.062 [244/265] Linking target 
drivers/librte_bus_pci.so.24.0 00:13:42.062 [245/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.062 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:13:42.062 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:13:42.062 [248/265] Linking target lib/librte_mbuf.so.24.0 00:13:42.062 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:13:42.321 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:13:42.321 [251/265] Linking target lib/librte_compressdev.so.24.0 00:13:42.321 [252/265] Linking target lib/librte_net.so.24.0 00:13:42.321 [253/265] Linking target lib/librte_reorder.so.24.0 00:13:42.321 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:13:42.598 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:13:42.598 [256/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:42.598 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:13:42.598 [258/265] Linking target lib/librte_hash.so.24.0 00:13:42.598 [259/265] Linking target lib/librte_cmdline.so.24.0 00:13:42.598 [260/265] Linking target lib/librte_ethdev.so.24.0 00:13:42.598 [261/265] Linking target lib/librte_security.so.24.0 00:13:42.598 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:13:42.867 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:13:42.867 [264/265] Linking target lib/librte_power.so.24.0 00:13:42.867 [265/265] Linking target lib/librte_vhost.so.24.0 00:13:42.867 INFO: autodetecting backend as ninja 00:13:42.867 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:44.244 CC lib/log/log_flags.o 00:13:44.244 CC lib/log/log.o 00:13:44.244 CC lib/log/log_deprecated.o 00:13:44.244 CC lib/ut/ut.o 00:13:44.244 CC lib/ut_mock/mock.o 00:13:44.244 LIB libspdk_ut_mock.a 00:13:44.244 LIB libspdk_ut.a 00:13:44.244 LIB libspdk_log.a 00:13:44.244 SO libspdk_ut_mock.so.6.0 00:13:44.244 SO libspdk_ut.so.2.0 00:13:44.244 SO libspdk_log.so.7.0 00:13:44.244 SYMLINK libspdk_ut_mock.so 00:13:44.244 SYMLINK libspdk_ut.so 00:13:44.244 SYMLINK libspdk_log.so 00:13:44.502 CC lib/dma/dma.o 00:13:44.502 CXX lib/trace_parser/trace.o 00:13:44.502 CC lib/util/base64.o 00:13:44.502 CC lib/ioat/ioat.o 00:13:44.502 CC lib/util/bit_array.o 00:13:44.502 CC lib/util/cpuset.o 00:13:44.502 CC lib/util/crc16.o 00:13:44.502 CC lib/util/crc32.o 00:13:44.502 CC lib/util/crc32c.o 00:13:44.502 CC lib/vfio_user/host/vfio_user_pci.o 00:13:44.761 CC lib/vfio_user/host/vfio_user.o 00:13:44.761 CC lib/util/crc32_ieee.o 00:13:44.761 LIB libspdk_dma.a 00:13:44.761 CC lib/util/crc64.o 00:13:44.761 SO libspdk_dma.so.4.0 00:13:44.761 CC lib/util/dif.o 00:13:44.761 CC lib/util/fd.o 00:13:44.761 SYMLINK libspdk_dma.so 00:13:44.761 CC lib/util/file.o 00:13:44.761 LIB libspdk_ioat.a 00:13:44.761 CC lib/util/hexlify.o 00:13:44.761 CC lib/util/iov.o 00:13:44.761 CC lib/util/math.o 00:13:44.761 SO libspdk_ioat.so.7.0 00:13:45.019 CC lib/util/pipe.o 00:13:45.019 SYMLINK libspdk_ioat.so 00:13:45.019 LIB libspdk_vfio_user.a 00:13:45.019 CC lib/util/strerror_tls.o 00:13:45.019 CC lib/util/string.o 00:13:45.019 CC lib/util/uuid.o 00:13:45.019 SO libspdk_vfio_user.so.5.0 00:13:45.019 CC lib/util/fd_group.o 
00:13:45.019 SYMLINK libspdk_vfio_user.so 00:13:45.019 CC lib/util/xor.o 00:13:45.019 CC lib/util/zipf.o 00:13:45.277 LIB libspdk_util.a 00:13:45.536 SO libspdk_util.so.9.0 00:13:45.536 LIB libspdk_trace_parser.a 00:13:45.536 SO libspdk_trace_parser.so.5.0 00:13:45.536 SYMLINK libspdk_util.so 00:13:45.796 SYMLINK libspdk_trace_parser.so 00:13:45.796 CC lib/conf/conf.o 00:13:45.796 CC lib/env_dpdk/env.o 00:13:45.796 CC lib/env_dpdk/memory.o 00:13:45.796 CC lib/vmd/vmd.o 00:13:45.796 CC lib/env_dpdk/init.o 00:13:45.796 CC lib/vmd/led.o 00:13:45.796 CC lib/env_dpdk/pci.o 00:13:45.796 CC lib/idxd/idxd.o 00:13:45.796 CC lib/rdma/common.o 00:13:45.796 CC lib/json/json_parse.o 00:13:46.054 CC lib/idxd/idxd_user.o 00:13:46.054 LIB libspdk_conf.a 00:13:46.054 CC lib/json/json_util.o 00:13:46.054 SO libspdk_conf.so.6.0 00:13:46.054 CC lib/rdma/rdma_verbs.o 00:13:46.054 SYMLINK libspdk_conf.so 00:13:46.054 CC lib/json/json_write.o 00:13:46.054 CC lib/env_dpdk/threads.o 00:13:46.312 CC lib/env_dpdk/pci_ioat.o 00:13:46.312 CC lib/env_dpdk/pci_virtio.o 00:13:46.312 CC lib/env_dpdk/pci_vmd.o 00:13:46.312 LIB libspdk_rdma.a 00:13:46.312 CC lib/env_dpdk/pci_idxd.o 00:13:46.312 LIB libspdk_idxd.a 00:13:46.312 CC lib/env_dpdk/pci_event.o 00:13:46.312 SO libspdk_rdma.so.6.0 00:13:46.312 CC lib/env_dpdk/sigbus_handler.o 00:13:46.312 SO libspdk_idxd.so.12.0 00:13:46.312 SYMLINK libspdk_rdma.so 00:13:46.571 CC lib/env_dpdk/pci_dpdk.o 00:13:46.571 SYMLINK libspdk_idxd.so 00:13:46.571 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:46.571 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:46.571 LIB libspdk_vmd.a 00:13:46.571 LIB libspdk_json.a 00:13:46.571 SO libspdk_vmd.so.6.0 00:13:46.571 SO libspdk_json.so.6.0 00:13:46.571 SYMLINK libspdk_vmd.so 00:13:46.571 SYMLINK libspdk_json.so 00:13:46.830 CC lib/jsonrpc/jsonrpc_server.o 00:13:46.830 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:46.830 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:46.830 CC lib/jsonrpc/jsonrpc_client.o 00:13:47.089 LIB libspdk_jsonrpc.a 00:13:47.089 SO libspdk_jsonrpc.so.6.0 00:13:47.347 SYMLINK libspdk_jsonrpc.so 00:13:47.347 LIB libspdk_env_dpdk.a 00:13:47.347 SO libspdk_env_dpdk.so.14.0 00:13:47.605 CC lib/rpc/rpc.o 00:13:47.606 SYMLINK libspdk_env_dpdk.so 00:13:47.864 LIB libspdk_rpc.a 00:13:47.864 SO libspdk_rpc.so.6.0 00:13:47.864 SYMLINK libspdk_rpc.so 00:13:48.123 CC lib/trace/trace.o 00:13:48.123 CC lib/trace/trace_flags.o 00:13:48.123 CC lib/trace/trace_rpc.o 00:13:48.123 CC lib/notify/notify.o 00:13:48.123 CC lib/notify/notify_rpc.o 00:13:48.123 CC lib/keyring/keyring.o 00:13:48.123 CC lib/keyring/keyring_rpc.o 00:13:48.381 LIB libspdk_notify.a 00:13:48.381 LIB libspdk_trace.a 00:13:48.381 LIB libspdk_keyring.a 00:13:48.381 SO libspdk_notify.so.6.0 00:13:48.381 SO libspdk_trace.so.10.0 00:13:48.381 SO libspdk_keyring.so.1.0 00:13:48.638 SYMLINK libspdk_notify.so 00:13:48.638 SYMLINK libspdk_keyring.so 00:13:48.638 SYMLINK libspdk_trace.so 00:13:48.896 CC lib/sock/sock_rpc.o 00:13:48.896 CC lib/sock/sock.o 00:13:48.896 CC lib/thread/thread.o 00:13:48.896 CC lib/thread/iobuf.o 00:13:49.468 LIB libspdk_sock.a 00:13:49.468 SO libspdk_sock.so.9.0 00:13:49.468 SYMLINK libspdk_sock.so 00:13:49.727 CC lib/nvme/nvme_ctrlr.o 00:13:49.727 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:49.727 CC lib/nvme/nvme_fabric.o 00:13:49.727 CC lib/nvme/nvme_ns.o 00:13:49.727 CC lib/nvme/nvme_ns_cmd.o 00:13:49.727 CC lib/nvme/nvme_pcie_common.o 00:13:49.727 CC lib/nvme/nvme_qpair.o 00:13:49.727 CC lib/nvme/nvme_pcie.o 00:13:49.727 CC lib/nvme/nvme.o 00:13:50.663 LIB libspdk_thread.a 
00:13:50.663 SO libspdk_thread.so.10.0 00:13:50.663 SYMLINK libspdk_thread.so 00:13:50.663 CC lib/nvme/nvme_quirks.o 00:13:50.663 CC lib/nvme/nvme_transport.o 00:13:50.663 CC lib/nvme/nvme_discovery.o 00:13:50.663 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:50.663 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:50.663 CC lib/nvme/nvme_tcp.o 00:13:50.922 CC lib/accel/accel.o 00:13:50.922 CC lib/blob/blobstore.o 00:13:50.922 CC lib/nvme/nvme_opal.o 00:13:51.181 CC lib/accel/accel_rpc.o 00:13:51.181 CC lib/accel/accel_sw.o 00:13:51.181 CC lib/blob/request.o 00:13:51.439 CC lib/nvme/nvme_io_msg.o 00:13:51.439 CC lib/nvme/nvme_poll_group.o 00:13:51.439 CC lib/nvme/nvme_zns.o 00:13:51.439 CC lib/init/json_config.o 00:13:51.439 CC lib/init/subsystem.o 00:13:51.699 CC lib/nvme/nvme_stubs.o 00:13:51.699 CC lib/nvme/nvme_auth.o 00:13:51.699 CC lib/init/subsystem_rpc.o 00:13:51.958 CC lib/init/rpc.o 00:13:51.958 LIB libspdk_accel.a 00:13:51.958 SO libspdk_accel.so.15.0 00:13:51.958 CC lib/nvme/nvme_cuse.o 00:13:52.217 SYMLINK libspdk_accel.so 00:13:52.217 LIB libspdk_init.a 00:13:52.217 CC lib/nvme/nvme_rdma.o 00:13:52.217 CC lib/blob/zeroes.o 00:13:52.217 SO libspdk_init.so.5.0 00:13:52.217 CC lib/blob/blob_bs_dev.o 00:13:52.217 SYMLINK libspdk_init.so 00:13:52.217 CC lib/virtio/virtio.o 00:13:52.217 CC lib/bdev/bdev.o 00:13:52.476 CC lib/bdev/bdev_rpc.o 00:13:52.476 CC lib/bdev/bdev_zone.o 00:13:52.476 CC lib/bdev/part.o 00:13:52.476 CC lib/event/app.o 00:13:52.736 CC lib/virtio/virtio_vhost_user.o 00:13:52.736 CC lib/virtio/virtio_vfio_user.o 00:13:52.736 CC lib/bdev/scsi_nvme.o 00:13:52.736 CC lib/event/reactor.o 00:13:52.736 CC lib/event/log_rpc.o 00:13:52.995 CC lib/event/app_rpc.o 00:13:52.995 CC lib/event/scheduler_static.o 00:13:52.995 CC lib/virtio/virtio_pci.o 00:13:53.254 LIB libspdk_event.a 00:13:53.254 SO libspdk_event.so.13.0 00:13:53.254 LIB libspdk_virtio.a 00:13:53.254 SYMLINK libspdk_event.so 00:13:53.254 SO libspdk_virtio.so.7.0 00:13:53.513 SYMLINK libspdk_virtio.so 00:13:53.513 LIB libspdk_nvme.a 00:13:53.772 SO libspdk_nvme.so.13.0 00:13:54.030 LIB libspdk_blob.a 00:13:54.030 SO libspdk_blob.so.11.0 00:13:54.030 SYMLINK libspdk_nvme.so 00:13:54.288 SYMLINK libspdk_blob.so 00:13:54.547 CC lib/blobfs/blobfs.o 00:13:54.547 CC lib/blobfs/tree.o 00:13:54.547 CC lib/lvol/lvol.o 00:13:55.114 LIB libspdk_bdev.a 00:13:55.114 SO libspdk_bdev.so.15.0 00:13:55.373 SYMLINK libspdk_bdev.so 00:13:55.373 LIB libspdk_blobfs.a 00:13:55.373 SO libspdk_blobfs.so.10.0 00:13:55.373 SYMLINK libspdk_blobfs.so 00:13:55.373 LIB libspdk_lvol.a 00:13:55.373 SO libspdk_lvol.so.10.0 00:13:55.631 CC lib/nvmf/ctrlr.o 00:13:55.631 CC lib/ublk/ublk.o 00:13:55.631 CC lib/ublk/ublk_rpc.o 00:13:55.631 CC lib/nvmf/ctrlr_discovery.o 00:13:55.631 CC lib/ftl/ftl_core.o 00:13:55.631 CC lib/scsi/dev.o 00:13:55.631 CC lib/scsi/lun.o 00:13:55.631 CC lib/ftl/ftl_init.o 00:13:55.631 CC lib/nbd/nbd.o 00:13:55.631 SYMLINK libspdk_lvol.so 00:13:55.631 CC lib/nbd/nbd_rpc.o 00:13:55.631 CC lib/scsi/port.o 00:13:55.631 CC lib/scsi/scsi.o 00:13:55.889 CC lib/scsi/scsi_bdev.o 00:13:55.889 CC lib/scsi/scsi_pr.o 00:13:55.889 CC lib/scsi/scsi_rpc.o 00:13:55.889 CC lib/nvmf/ctrlr_bdev.o 00:13:55.889 CC lib/ftl/ftl_layout.o 00:13:55.889 CC lib/scsi/task.o 00:13:55.889 LIB libspdk_nbd.a 00:13:55.889 CC lib/nvmf/subsystem.o 00:13:56.147 CC lib/nvmf/nvmf.o 00:13:56.147 SO libspdk_nbd.so.7.0 00:13:56.147 SYMLINK libspdk_nbd.so 00:13:56.147 CC lib/nvmf/nvmf_rpc.o 00:13:56.147 CC lib/nvmf/transport.o 00:13:56.147 CC lib/nvmf/tcp.o 00:13:56.147 LIB 
libspdk_ublk.a 00:13:56.147 CC lib/ftl/ftl_debug.o 00:13:56.147 SO libspdk_ublk.so.3.0 00:13:56.147 LIB libspdk_scsi.a 00:13:56.405 SYMLINK libspdk_ublk.so 00:13:56.405 CC lib/ftl/ftl_io.o 00:13:56.405 SO libspdk_scsi.so.9.0 00:13:56.405 SYMLINK libspdk_scsi.so 00:13:56.405 CC lib/ftl/ftl_sb.o 00:13:56.405 CC lib/ftl/ftl_l2p.o 00:13:56.664 CC lib/ftl/ftl_l2p_flat.o 00:13:56.664 CC lib/ftl/ftl_nv_cache.o 00:13:56.664 CC lib/ftl/ftl_band.o 00:13:56.664 CC lib/nvmf/rdma.o 00:13:56.664 CC lib/ftl/ftl_band_ops.o 00:13:56.922 CC lib/ftl/ftl_writer.o 00:13:56.922 CC lib/ftl/ftl_rq.o 00:13:56.922 CC lib/ftl/ftl_reloc.o 00:13:57.181 CC lib/ftl/ftl_l2p_cache.o 00:13:57.181 CC lib/ftl/ftl_p2l.o 00:13:57.181 CC lib/ftl/mngt/ftl_mngt.o 00:13:57.181 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:13:57.181 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:13:57.181 CC lib/iscsi/conn.o 00:13:57.439 CC lib/ftl/mngt/ftl_mngt_startup.o 00:13:57.439 CC lib/ftl/mngt/ftl_mngt_md.o 00:13:57.439 CC lib/iscsi/init_grp.o 00:13:57.439 CC lib/iscsi/iscsi.o 00:13:57.439 CC lib/ftl/mngt/ftl_mngt_misc.o 00:13:57.439 CC lib/vhost/vhost.o 00:13:57.698 CC lib/vhost/vhost_rpc.o 00:13:57.698 CC lib/vhost/vhost_scsi.o 00:13:57.698 CC lib/vhost/vhost_blk.o 00:13:57.698 CC lib/iscsi/md5.o 00:13:57.698 CC lib/iscsi/param.o 00:13:57.698 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:13:57.957 CC lib/iscsi/portal_grp.o 00:13:57.957 CC lib/iscsi/tgt_node.o 00:13:57.957 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:13:58.216 CC lib/iscsi/iscsi_subsystem.o 00:13:58.216 CC lib/iscsi/iscsi_rpc.o 00:13:58.216 CC lib/ftl/mngt/ftl_mngt_band.o 00:13:58.216 CC lib/vhost/rte_vhost_user.o 00:13:58.216 CC lib/iscsi/task.o 00:13:58.475 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:13:58.475 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:13:58.475 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:13:58.475 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:13:58.475 CC lib/ftl/utils/ftl_conf.o 00:13:58.734 CC lib/ftl/utils/ftl_md.o 00:13:58.734 CC lib/ftl/utils/ftl_mempool.o 00:13:58.734 LIB libspdk_nvmf.a 00:13:58.734 CC lib/ftl/utils/ftl_bitmap.o 00:13:58.734 CC lib/ftl/utils/ftl_property.o 00:13:58.734 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:13:58.734 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:13:58.994 SO libspdk_nvmf.so.18.0 00:13:58.994 LIB libspdk_iscsi.a 00:13:58.994 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:13:58.994 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:13:58.994 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:13:58.994 SO libspdk_iscsi.so.8.0 00:13:58.994 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:13:58.994 CC lib/ftl/upgrade/ftl_sb_v3.o 00:13:58.994 CC lib/ftl/upgrade/ftl_sb_v5.o 00:13:58.994 SYMLINK libspdk_nvmf.so 00:13:58.994 CC lib/ftl/nvc/ftl_nvc_dev.o 00:13:59.252 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:13:59.252 CC lib/ftl/base/ftl_base_dev.o 00:13:59.252 CC lib/ftl/base/ftl_base_bdev.o 00:13:59.252 CC lib/ftl/ftl_trace.o 00:13:59.252 SYMLINK libspdk_iscsi.so 00:13:59.510 LIB libspdk_vhost.a 00:13:59.510 SO libspdk_vhost.so.8.0 00:13:59.510 LIB libspdk_ftl.a 00:13:59.510 SYMLINK libspdk_vhost.so 00:13:59.769 SO libspdk_ftl.so.9.0 00:14:00.028 SYMLINK libspdk_ftl.so 00:14:00.596 CC module/env_dpdk/env_dpdk_rpc.o 00:14:00.596 CC module/accel/dsa/accel_dsa.o 00:14:00.596 CC module/accel/error/accel_error.o 00:14:00.596 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:00.596 CC module/blob/bdev/blob_bdev.o 00:14:00.596 CC module/scheduler/gscheduler/gscheduler.o 00:14:00.596 CC module/accel/ioat/accel_ioat.o 00:14:00.596 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:00.596 CC module/sock/posix/posix.o 
00:14:00.596 CC module/keyring/file/keyring.o 00:14:00.596 LIB libspdk_env_dpdk_rpc.a 00:14:00.596 SO libspdk_env_dpdk_rpc.so.6.0 00:14:00.596 CC module/keyring/file/keyring_rpc.o 00:14:00.596 LIB libspdk_scheduler_dpdk_governor.a 00:14:00.854 SYMLINK libspdk_env_dpdk_rpc.so 00:14:00.855 LIB libspdk_scheduler_gscheduler.a 00:14:00.855 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:00.855 CC module/accel/ioat/accel_ioat_rpc.o 00:14:00.855 LIB libspdk_scheduler_dynamic.a 00:14:00.855 CC module/accel/error/accel_error_rpc.o 00:14:00.855 CC module/accel/dsa/accel_dsa_rpc.o 00:14:00.855 SO libspdk_scheduler_gscheduler.so.4.0 00:14:00.855 SO libspdk_scheduler_dynamic.so.4.0 00:14:00.855 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:00.855 LIB libspdk_blob_bdev.a 00:14:00.855 SYMLINK libspdk_scheduler_gscheduler.so 00:14:00.855 SYMLINK libspdk_scheduler_dynamic.so 00:14:00.855 LIB libspdk_keyring_file.a 00:14:00.855 SO libspdk_blob_bdev.so.11.0 00:14:00.855 SO libspdk_keyring_file.so.1.0 00:14:00.855 LIB libspdk_accel_ioat.a 00:14:00.855 LIB libspdk_accel_error.a 00:14:00.855 LIB libspdk_accel_dsa.a 00:14:00.855 SO libspdk_accel_ioat.so.6.0 00:14:00.855 SYMLINK libspdk_blob_bdev.so 00:14:00.855 SO libspdk_accel_error.so.2.0 00:14:00.855 SYMLINK libspdk_keyring_file.so 00:14:00.855 SO libspdk_accel_dsa.so.5.0 00:14:01.114 SYMLINK libspdk_accel_ioat.so 00:14:01.114 CC module/accel/iaa/accel_iaa_rpc.o 00:14:01.114 CC module/accel/iaa/accel_iaa.o 00:14:01.114 SYMLINK libspdk_accel_error.so 00:14:01.114 SYMLINK libspdk_accel_dsa.so 00:14:01.114 CC module/bdev/error/vbdev_error.o 00:14:01.114 CC module/bdev/malloc/bdev_malloc.o 00:14:01.114 CC module/bdev/gpt/gpt.o 00:14:01.114 CC module/bdev/delay/vbdev_delay.o 00:14:01.114 LIB libspdk_accel_iaa.a 00:14:01.373 CC module/bdev/null/bdev_null.o 00:14:01.373 CC module/blobfs/bdev/blobfs_bdev.o 00:14:01.373 CC module/bdev/lvol/vbdev_lvol.o 00:14:01.373 SO libspdk_accel_iaa.so.3.0 00:14:01.373 LIB libspdk_sock_posix.a 00:14:01.373 SO libspdk_sock_posix.so.6.0 00:14:01.374 CC module/bdev/nvme/bdev_nvme.o 00:14:01.374 SYMLINK libspdk_accel_iaa.so 00:14:01.374 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:01.374 CC module/bdev/gpt/vbdev_gpt.o 00:14:01.374 SYMLINK libspdk_sock_posix.so 00:14:01.374 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:01.374 CC module/bdev/null/bdev_null_rpc.o 00:14:01.632 CC module/bdev/error/vbdev_error_rpc.o 00:14:01.632 CC module/bdev/nvme/nvme_rpc.o 00:14:01.632 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:01.632 LIB libspdk_bdev_null.a 00:14:01.632 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:01.632 LIB libspdk_blobfs_bdev.a 00:14:01.632 SO libspdk_bdev_null.so.6.0 00:14:01.632 LIB libspdk_bdev_gpt.a 00:14:01.632 SO libspdk_blobfs_bdev.so.6.0 00:14:01.890 SYMLINK libspdk_bdev_null.so 00:14:01.890 SO libspdk_bdev_gpt.so.6.0 00:14:01.890 LIB libspdk_bdev_error.a 00:14:01.890 CC module/bdev/nvme/bdev_mdns_client.o 00:14:01.890 LIB libspdk_bdev_delay.a 00:14:01.890 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:01.890 SO libspdk_bdev_error.so.6.0 00:14:01.890 SO libspdk_bdev_delay.so.6.0 00:14:01.890 SYMLINK libspdk_blobfs_bdev.so 00:14:01.890 LIB libspdk_bdev_malloc.a 00:14:01.890 SYMLINK libspdk_bdev_gpt.so 00:14:01.890 CC module/bdev/nvme/vbdev_opal.o 00:14:01.890 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:01.890 SO libspdk_bdev_malloc.so.6.0 00:14:01.890 SYMLINK libspdk_bdev_error.so 00:14:01.890 SYMLINK libspdk_bdev_delay.so 00:14:01.890 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:01.890 SYMLINK libspdk_bdev_malloc.so 
00:14:01.890 CC module/bdev/passthru/vbdev_passthru.o 00:14:02.149 CC module/bdev/raid/bdev_raid.o 00:14:02.149 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:02.149 LIB libspdk_bdev_lvol.a 00:14:02.149 CC module/bdev/split/vbdev_split.o 00:14:02.149 SO libspdk_bdev_lvol.so.6.0 00:14:02.149 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:02.149 SYMLINK libspdk_bdev_lvol.so 00:14:02.407 CC module/bdev/aio/bdev_aio.o 00:14:02.407 CC module/bdev/raid/bdev_raid_rpc.o 00:14:02.407 LIB libspdk_bdev_passthru.a 00:14:02.407 CC module/bdev/ftl/bdev_ftl.o 00:14:02.407 SO libspdk_bdev_passthru.so.6.0 00:14:02.407 CC module/bdev/iscsi/bdev_iscsi.o 00:14:02.407 CC module/bdev/split/vbdev_split_rpc.o 00:14:02.408 SYMLINK libspdk_bdev_passthru.so 00:14:02.408 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:02.408 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:02.408 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:02.408 CC module/bdev/raid/bdev_raid_sb.o 00:14:02.666 CC module/bdev/aio/bdev_aio_rpc.o 00:14:02.666 LIB libspdk_bdev_split.a 00:14:02.666 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:02.666 CC module/bdev/raid/raid0.o 00:14:02.666 SO libspdk_bdev_split.so.6.0 00:14:02.666 LIB libspdk_bdev_zone_block.a 00:14:02.666 SO libspdk_bdev_zone_block.so.6.0 00:14:02.666 SYMLINK libspdk_bdev_split.so 00:14:02.666 CC module/bdev/raid/raid1.o 00:14:02.666 LIB libspdk_bdev_iscsi.a 00:14:02.666 SO libspdk_bdev_iscsi.so.6.0 00:14:02.666 LIB libspdk_bdev_aio.a 00:14:02.666 SYMLINK libspdk_bdev_zone_block.so 00:14:02.666 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:02.925 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:02.925 SO libspdk_bdev_aio.so.6.0 00:14:02.925 SYMLINK libspdk_bdev_iscsi.so 00:14:02.925 CC module/bdev/raid/concat.o 00:14:02.925 LIB libspdk_bdev_ftl.a 00:14:02.925 SYMLINK libspdk_bdev_aio.so 00:14:02.925 SO libspdk_bdev_ftl.so.6.0 00:14:02.925 SYMLINK libspdk_bdev_ftl.so 00:14:03.184 LIB libspdk_bdev_virtio.a 00:14:03.184 LIB libspdk_bdev_raid.a 00:14:03.184 SO libspdk_bdev_virtio.so.6.0 00:14:03.184 SO libspdk_bdev_raid.so.6.0 00:14:03.184 SYMLINK libspdk_bdev_virtio.so 00:14:03.184 SYMLINK libspdk_bdev_raid.so 00:14:03.750 LIB libspdk_bdev_nvme.a 00:14:03.750 SO libspdk_bdev_nvme.so.7.0 00:14:03.750 SYMLINK libspdk_bdev_nvme.so 00:14:04.316 CC module/event/subsystems/vmd/vmd.o 00:14:04.316 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:04.316 CC module/event/subsystems/sock/sock.o 00:14:04.316 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:04.316 CC module/event/subsystems/scheduler/scheduler.o 00:14:04.316 CC module/event/subsystems/keyring/keyring.o 00:14:04.316 CC module/event/subsystems/iobuf/iobuf.o 00:14:04.316 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:04.574 LIB libspdk_event_keyring.a 00:14:04.574 LIB libspdk_event_sock.a 00:14:04.574 LIB libspdk_event_scheduler.a 00:14:04.574 SO libspdk_event_keyring.so.1.0 00:14:04.574 SO libspdk_event_sock.so.5.0 00:14:04.574 LIB libspdk_event_vmd.a 00:14:04.574 LIB libspdk_event_vhost_blk.a 00:14:04.574 SO libspdk_event_scheduler.so.4.0 00:14:04.574 LIB libspdk_event_iobuf.a 00:14:04.574 SO libspdk_event_vhost_blk.so.3.0 00:14:04.574 SO libspdk_event_vmd.so.6.0 00:14:04.574 SYMLINK libspdk_event_keyring.so 00:14:04.574 SYMLINK libspdk_event_sock.so 00:14:04.574 SYMLINK libspdk_event_scheduler.so 00:14:04.574 SO libspdk_event_iobuf.so.3.0 00:14:04.574 SYMLINK libspdk_event_vhost_blk.so 00:14:04.574 SYMLINK libspdk_event_vmd.so 00:14:04.833 SYMLINK libspdk_event_iobuf.so 00:14:05.091 CC module/event/subsystems/accel/accel.o 
00:14:05.091 LIB libspdk_event_accel.a 00:14:05.091 SO libspdk_event_accel.so.6.0 00:14:05.350 SYMLINK libspdk_event_accel.so 00:14:05.609 CC module/event/subsystems/bdev/bdev.o 00:14:05.867 LIB libspdk_event_bdev.a 00:14:05.867 SO libspdk_event_bdev.so.6.0 00:14:05.867 SYMLINK libspdk_event_bdev.so 00:14:06.154 CC module/event/subsystems/nbd/nbd.o 00:14:06.154 CC module/event/subsystems/ublk/ublk.o 00:14:06.154 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:14:06.154 CC module/event/subsystems/scsi/scsi.o 00:14:06.154 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:06.410 LIB libspdk_event_nbd.a 00:14:06.410 LIB libspdk_event_ublk.a 00:14:06.410 LIB libspdk_event_scsi.a 00:14:06.410 SO libspdk_event_nbd.so.6.0 00:14:06.410 SO libspdk_event_ublk.so.3.0 00:14:06.410 SO libspdk_event_scsi.so.6.0 00:14:06.410 SYMLINK libspdk_event_nbd.so 00:14:06.410 SYMLINK libspdk_event_ublk.so 00:14:06.410 SYMLINK libspdk_event_scsi.so 00:14:06.410 LIB libspdk_event_nvmf.a 00:14:06.410 SO libspdk_event_nvmf.so.6.0 00:14:06.702 SYMLINK libspdk_event_nvmf.so 00:14:06.703 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:06.703 CC module/event/subsystems/iscsi/iscsi.o 00:14:06.960 LIB libspdk_event_vhost_scsi.a 00:14:06.960 LIB libspdk_event_iscsi.a 00:14:06.960 SO libspdk_event_vhost_scsi.so.3.0 00:14:06.960 SO libspdk_event_iscsi.so.6.0 00:14:06.960 SYMLINK libspdk_event_iscsi.so 00:14:06.961 SYMLINK libspdk_event_vhost_scsi.so 00:14:07.219 SO libspdk.so.6.0 00:14:07.219 SYMLINK libspdk.so 00:14:07.477 CXX app/trace/trace.o 00:14:07.477 CC examples/nvme/hello_world/hello_world.o 00:14:07.477 CC examples/ioat/perf/perf.o 00:14:07.477 CC examples/accel/perf/accel_perf.o 00:14:07.477 CC examples/blob/hello_world/hello_blob.o 00:14:07.477 CC test/blobfs/mkfs/mkfs.o 00:14:07.477 CC test/accel/dif/dif.o 00:14:07.477 CC test/app/bdev_svc/bdev_svc.o 00:14:07.477 CC test/bdev/bdevio/bdevio.o 00:14:07.477 CC examples/bdev/hello_world/hello_bdev.o 00:14:07.735 LINK ioat_perf 00:14:07.735 LINK mkfs 00:14:07.735 LINK hello_world 00:14:07.735 LINK bdev_svc 00:14:07.735 LINK hello_blob 00:14:07.735 LINK hello_bdev 00:14:07.735 LINK spdk_trace 00:14:07.992 LINK dif 00:14:07.992 LINK bdevio 00:14:07.992 CC examples/ioat/verify/verify.o 00:14:07.992 LINK accel_perf 00:14:07.992 CC examples/nvme/reconnect/reconnect.o 00:14:07.992 TEST_HEADER include/spdk/accel.h 00:14:07.992 TEST_HEADER include/spdk/accel_module.h 00:14:07.992 TEST_HEADER include/spdk/assert.h 00:14:07.992 TEST_HEADER include/spdk/barrier.h 00:14:07.992 TEST_HEADER include/spdk/base64.h 00:14:07.992 TEST_HEADER include/spdk/bdev.h 00:14:07.993 TEST_HEADER include/spdk/bdev_module.h 00:14:08.250 TEST_HEADER include/spdk/bdev_zone.h 00:14:08.250 TEST_HEADER include/spdk/bit_array.h 00:14:08.250 TEST_HEADER include/spdk/bit_pool.h 00:14:08.250 TEST_HEADER include/spdk/blob_bdev.h 00:14:08.250 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:08.250 TEST_HEADER include/spdk/blobfs.h 00:14:08.250 TEST_HEADER include/spdk/blob.h 00:14:08.250 TEST_HEADER include/spdk/conf.h 00:14:08.250 TEST_HEADER include/spdk/config.h 00:14:08.250 TEST_HEADER include/spdk/cpuset.h 00:14:08.250 TEST_HEADER include/spdk/crc16.h 00:14:08.250 TEST_HEADER include/spdk/crc32.h 00:14:08.250 TEST_HEADER include/spdk/crc64.h 00:14:08.250 TEST_HEADER include/spdk/dif.h 00:14:08.250 TEST_HEADER include/spdk/dma.h 00:14:08.250 CC examples/blob/cli/blobcli.o 00:14:08.250 TEST_HEADER include/spdk/endian.h 00:14:08.250 TEST_HEADER include/spdk/env_dpdk.h 00:14:08.250 CC 
app/trace_record/trace_record.o 00:14:08.250 TEST_HEADER include/spdk/env.h 00:14:08.250 TEST_HEADER include/spdk/event.h 00:14:08.250 TEST_HEADER include/spdk/fd_group.h 00:14:08.250 TEST_HEADER include/spdk/fd.h 00:14:08.250 TEST_HEADER include/spdk/file.h 00:14:08.250 TEST_HEADER include/spdk/ftl.h 00:14:08.250 TEST_HEADER include/spdk/gpt_spec.h 00:14:08.250 TEST_HEADER include/spdk/hexlify.h 00:14:08.250 LINK verify 00:14:08.250 TEST_HEADER include/spdk/histogram_data.h 00:14:08.250 TEST_HEADER include/spdk/idxd.h 00:14:08.250 TEST_HEADER include/spdk/idxd_spec.h 00:14:08.250 TEST_HEADER include/spdk/init.h 00:14:08.250 TEST_HEADER include/spdk/ioat.h 00:14:08.250 TEST_HEADER include/spdk/ioat_spec.h 00:14:08.250 TEST_HEADER include/spdk/iscsi_spec.h 00:14:08.250 TEST_HEADER include/spdk/json.h 00:14:08.250 TEST_HEADER include/spdk/jsonrpc.h 00:14:08.250 TEST_HEADER include/spdk/keyring.h 00:14:08.250 TEST_HEADER include/spdk/keyring_module.h 00:14:08.250 TEST_HEADER include/spdk/likely.h 00:14:08.250 TEST_HEADER include/spdk/log.h 00:14:08.250 TEST_HEADER include/spdk/lvol.h 00:14:08.250 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:08.250 TEST_HEADER include/spdk/memory.h 00:14:08.250 TEST_HEADER include/spdk/mmio.h 00:14:08.250 TEST_HEADER include/spdk/nbd.h 00:14:08.250 CC examples/bdev/bdevperf/bdevperf.o 00:14:08.250 TEST_HEADER include/spdk/notify.h 00:14:08.250 TEST_HEADER include/spdk/nvme.h 00:14:08.250 TEST_HEADER include/spdk/nvme_intel.h 00:14:08.250 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:08.250 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:08.250 TEST_HEADER include/spdk/nvme_spec.h 00:14:08.250 TEST_HEADER include/spdk/nvme_zns.h 00:14:08.250 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:08.250 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:08.250 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:08.250 TEST_HEADER include/spdk/nvmf.h 00:14:08.250 TEST_HEADER include/spdk/nvmf_spec.h 00:14:08.250 TEST_HEADER include/spdk/nvmf_transport.h 00:14:08.250 TEST_HEADER include/spdk/opal.h 00:14:08.250 TEST_HEADER include/spdk/opal_spec.h 00:14:08.250 TEST_HEADER include/spdk/pci_ids.h 00:14:08.250 TEST_HEADER include/spdk/pipe.h 00:14:08.250 TEST_HEADER include/spdk/queue.h 00:14:08.250 TEST_HEADER include/spdk/reduce.h 00:14:08.250 TEST_HEADER include/spdk/rpc.h 00:14:08.250 TEST_HEADER include/spdk/scheduler.h 00:14:08.250 TEST_HEADER include/spdk/scsi.h 00:14:08.250 TEST_HEADER include/spdk/scsi_spec.h 00:14:08.250 TEST_HEADER include/spdk/sock.h 00:14:08.250 TEST_HEADER include/spdk/stdinc.h 00:14:08.250 TEST_HEADER include/spdk/string.h 00:14:08.250 TEST_HEADER include/spdk/thread.h 00:14:08.250 TEST_HEADER include/spdk/trace.h 00:14:08.250 TEST_HEADER include/spdk/trace_parser.h 00:14:08.250 TEST_HEADER include/spdk/tree.h 00:14:08.250 TEST_HEADER include/spdk/ublk.h 00:14:08.250 TEST_HEADER include/spdk/util.h 00:14:08.250 TEST_HEADER include/spdk/uuid.h 00:14:08.250 TEST_HEADER include/spdk/version.h 00:14:08.250 CC app/nvmf_tgt/nvmf_main.o 00:14:08.250 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:08.250 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:08.250 TEST_HEADER include/spdk/vhost.h 00:14:08.250 TEST_HEADER include/spdk/vmd.h 00:14:08.250 CC app/iscsi_tgt/iscsi_tgt.o 00:14:08.250 TEST_HEADER include/spdk/xor.h 00:14:08.250 TEST_HEADER include/spdk/zipf.h 00:14:08.250 CXX test/cpp_headers/accel.o 00:14:08.508 LINK reconnect 00:14:08.508 LINK nvmf_tgt 00:14:08.508 CC app/spdk_tgt/spdk_tgt.o 00:14:08.508 CXX test/cpp_headers/accel_module.o 00:14:08.508 LINK 
iscsi_tgt 00:14:08.508 LINK spdk_trace_record 00:14:08.508 LINK nvme_fuzz 00:14:08.765 LINK blobcli 00:14:08.765 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:08.765 CXX test/cpp_headers/assert.o 00:14:08.765 LINK spdk_tgt 00:14:08.765 CC test/app/histogram_perf/histogram_perf.o 00:14:08.765 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:08.765 CC test/app/jsoncat/jsoncat.o 00:14:08.765 CXX test/cpp_headers/barrier.o 00:14:09.022 LINK histogram_perf 00:14:09.022 CC test/dma/test_dma/test_dma.o 00:14:09.022 LINK bdevperf 00:14:09.022 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:09.022 LINK jsoncat 00:14:09.022 CC examples/sock/hello_world/hello_sock.o 00:14:09.022 CC app/spdk_lspci/spdk_lspci.o 00:14:09.022 CXX test/cpp_headers/base64.o 00:14:09.022 LINK nvme_manage 00:14:09.280 CC app/spdk_nvme_perf/perf.o 00:14:09.280 LINK spdk_lspci 00:14:09.280 CXX test/cpp_headers/bdev.o 00:14:09.280 CC app/spdk_nvme_identify/identify.o 00:14:09.280 LINK hello_sock 00:14:09.280 LINK test_dma 00:14:09.280 CC app/spdk_nvme_discover/discovery_aer.o 00:14:09.280 CC examples/nvme/arbitration/arbitration.o 00:14:09.539 LINK vhost_fuzz 00:14:09.539 CC app/spdk_top/spdk_top.o 00:14:09.539 CXX test/cpp_headers/bdev_module.o 00:14:09.539 LINK spdk_nvme_discover 00:14:09.539 CC test/env/vtophys/vtophys.o 00:14:09.539 CXX test/cpp_headers/bdev_zone.o 00:14:09.798 CC test/env/mem_callbacks/mem_callbacks.o 00:14:09.798 CC test/event/event_perf/event_perf.o 00:14:09.798 LINK arbitration 00:14:09.798 LINK vtophys 00:14:09.798 CC test/event/reactor/reactor.o 00:14:09.798 CXX test/cpp_headers/bit_array.o 00:14:09.798 LINK event_perf 00:14:10.056 LINK iscsi_fuzz 00:14:10.056 LINK reactor 00:14:10.056 LINK spdk_nvme_perf 00:14:10.056 CXX test/cpp_headers/bit_pool.o 00:14:10.056 CC examples/nvme/hotplug/hotplug.o 00:14:10.056 CC test/event/reactor_perf/reactor_perf.o 00:14:10.056 CXX test/cpp_headers/blob_bdev.o 00:14:10.056 LINK spdk_nvme_identify 00:14:10.056 CXX test/cpp_headers/blobfs_bdev.o 00:14:10.056 CXX test/cpp_headers/blobfs.o 00:14:10.314 LINK reactor_perf 00:14:10.314 CXX test/cpp_headers/blob.o 00:14:10.314 CC test/app/stub/stub.o 00:14:10.314 LINK hotplug 00:14:10.314 LINK spdk_top 00:14:10.314 CC app/vhost/vhost.o 00:14:10.314 LINK mem_callbacks 00:14:10.314 CC app/spdk_dd/spdk_dd.o 00:14:10.314 CXX test/cpp_headers/conf.o 00:14:10.573 LINK stub 00:14:10.573 CXX test/cpp_headers/config.o 00:14:10.573 CC test/event/app_repeat/app_repeat.o 00:14:10.573 CC test/event/scheduler/scheduler.o 00:14:10.573 CXX test/cpp_headers/cpuset.o 00:14:10.573 LINK vhost 00:14:10.573 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:10.573 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:10.573 CXX test/cpp_headers/crc16.o 00:14:10.573 LINK app_repeat 00:14:10.573 CC test/lvol/esnap/esnap.o 00:14:10.832 LINK scheduler 00:14:10.832 LINK cmb_copy 00:14:10.832 LINK spdk_dd 00:14:10.832 CC test/rpc_client/rpc_client_test.o 00:14:10.832 LINK env_dpdk_post_init 00:14:10.832 CXX test/cpp_headers/crc32.o 00:14:10.832 CC test/nvme/aer/aer.o 00:14:11.090 CC test/thread/poller_perf/poller_perf.o 00:14:11.090 CXX test/cpp_headers/crc64.o 00:14:11.090 CXX test/cpp_headers/dif.o 00:14:11.090 CXX test/cpp_headers/dma.o 00:14:11.090 CC app/fio/nvme/fio_plugin.o 00:14:11.090 LINK rpc_client_test 00:14:11.090 CC test/env/memory/memory_ut.o 00:14:11.090 CC examples/nvme/abort/abort.o 00:14:11.090 LINK aer 00:14:11.090 LINK poller_perf 00:14:11.090 CXX test/cpp_headers/endian.o 00:14:11.090 CXX test/cpp_headers/env_dpdk.o 00:14:11.348 
CC test/env/pci/pci_ut.o 00:14:11.348 CC app/fio/bdev/fio_plugin.o 00:14:11.348 CXX test/cpp_headers/env.o 00:14:11.348 CC test/nvme/reset/reset.o 00:14:11.348 CC test/nvme/sgl/sgl.o 00:14:11.348 LINK abort 00:14:11.606 CXX test/cpp_headers/event.o 00:14:11.606 CC examples/vmd/lsvmd/lsvmd.o 00:14:11.606 LINK spdk_nvme 00:14:11.606 LINK reset 00:14:11.606 LINK pci_ut 00:14:11.606 CXX test/cpp_headers/fd_group.o 00:14:11.606 LINK sgl 00:14:11.606 LINK lsvmd 00:14:11.606 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:11.873 LINK spdk_bdev 00:14:11.873 CXX test/cpp_headers/fd.o 00:14:11.873 LINK pmr_persistence 00:14:11.873 CXX test/cpp_headers/file.o 00:14:11.873 CC test/nvme/e2edp/nvme_dp.o 00:14:11.873 CC examples/vmd/led/led.o 00:14:11.873 CC examples/nvmf/nvmf/nvmf.o 00:14:11.873 LINK memory_ut 00:14:12.133 CXX test/cpp_headers/ftl.o 00:14:12.133 CC examples/util/zipf/zipf.o 00:14:12.133 CC test/nvme/overhead/overhead.o 00:14:12.133 LINK led 00:14:12.133 LINK zipf 00:14:12.133 LINK nvme_dp 00:14:12.133 CXX test/cpp_headers/gpt_spec.o 00:14:12.391 LINK nvmf 00:14:12.391 CC examples/idxd/perf/perf.o 00:14:12.391 CC examples/thread/thread/thread_ex.o 00:14:12.391 LINK overhead 00:14:12.391 CXX test/cpp_headers/hexlify.o 00:14:12.391 CXX test/cpp_headers/histogram_data.o 00:14:12.649 CC test/nvme/err_injection/err_injection.o 00:14:12.649 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:12.649 CXX test/cpp_headers/idxd.o 00:14:12.649 CXX test/cpp_headers/idxd_spec.o 00:14:12.649 CC test/nvme/startup/startup.o 00:14:12.649 LINK thread 00:14:12.649 CC test/nvme/reserve/reserve.o 00:14:12.649 LINK idxd_perf 00:14:12.649 LINK err_injection 00:14:12.649 LINK interrupt_tgt 00:14:12.649 CXX test/cpp_headers/init.o 00:14:12.907 LINK startup 00:14:12.907 LINK reserve 00:14:12.907 CC test/nvme/simple_copy/simple_copy.o 00:14:12.907 CXX test/cpp_headers/ioat.o 00:14:12.907 CXX test/cpp_headers/ioat_spec.o 00:14:12.907 CC test/nvme/connect_stress/connect_stress.o 00:14:12.907 CC test/nvme/boot_partition/boot_partition.o 00:14:12.907 CC test/nvme/compliance/nvme_compliance.o 00:14:13.165 CC test/nvme/fused_ordering/fused_ordering.o 00:14:13.165 CXX test/cpp_headers/iscsi_spec.o 00:14:13.165 LINK simple_copy 00:14:13.165 CC test/nvme/doorbell_aers/doorbell_aers.o 00:14:13.165 LINK connect_stress 00:14:13.165 LINK boot_partition 00:14:13.165 CC test/nvme/fdp/fdp.o 00:14:13.165 CXX test/cpp_headers/json.o 00:14:13.165 LINK fused_ordering 00:14:13.423 CXX test/cpp_headers/jsonrpc.o 00:14:13.423 CXX test/cpp_headers/keyring.o 00:14:13.423 LINK nvme_compliance 00:14:13.423 LINK doorbell_aers 00:14:13.423 CC test/nvme/cuse/cuse.o 00:14:13.423 CXX test/cpp_headers/keyring_module.o 00:14:13.423 CXX test/cpp_headers/likely.o 00:14:13.681 CXX test/cpp_headers/log.o 00:14:13.681 LINK fdp 00:14:13.681 CXX test/cpp_headers/lvol.o 00:14:13.681 CXX test/cpp_headers/mmio.o 00:14:13.681 CXX test/cpp_headers/memory.o 00:14:13.681 CXX test/cpp_headers/nbd.o 00:14:13.681 CXX test/cpp_headers/notify.o 00:14:13.681 CXX test/cpp_headers/nvme.o 00:14:13.681 CXX test/cpp_headers/nvme_intel.o 00:14:13.940 CXX test/cpp_headers/nvme_ocssd.o 00:14:13.940 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:13.940 CXX test/cpp_headers/nvme_spec.o 00:14:13.940 CXX test/cpp_headers/nvme_zns.o 00:14:13.940 CXX test/cpp_headers/nvmf_cmd.o 00:14:13.940 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:13.940 CXX test/cpp_headers/nvmf.o 00:14:13.940 CXX test/cpp_headers/nvmf_spec.o 00:14:13.940 CXX test/cpp_headers/nvmf_transport.o 00:14:13.940 CXX 
test/cpp_headers/opal.o 00:14:13.940 CXX test/cpp_headers/opal_spec.o 00:14:14.198 CXX test/cpp_headers/pci_ids.o 00:14:14.198 CXX test/cpp_headers/pipe.o 00:14:14.198 CXX test/cpp_headers/queue.o 00:14:14.198 CXX test/cpp_headers/reduce.o 00:14:14.198 CXX test/cpp_headers/rpc.o 00:14:14.198 CXX test/cpp_headers/scheduler.o 00:14:14.198 CXX test/cpp_headers/scsi.o 00:14:14.198 CXX test/cpp_headers/scsi_spec.o 00:14:14.468 CXX test/cpp_headers/sock.o 00:14:14.468 CXX test/cpp_headers/stdinc.o 00:14:14.468 CXX test/cpp_headers/string.o 00:14:14.468 CXX test/cpp_headers/thread.o 00:14:14.468 CXX test/cpp_headers/trace.o 00:14:14.468 CXX test/cpp_headers/trace_parser.o 00:14:14.468 CXX test/cpp_headers/tree.o 00:14:14.468 CXX test/cpp_headers/ublk.o 00:14:14.468 CXX test/cpp_headers/util.o 00:14:14.468 CXX test/cpp_headers/uuid.o 00:14:14.468 CXX test/cpp_headers/version.o 00:14:14.738 CXX test/cpp_headers/vfio_user_pci.o 00:14:14.738 CXX test/cpp_headers/vfio_user_spec.o 00:14:14.738 CXX test/cpp_headers/vhost.o 00:14:14.738 CXX test/cpp_headers/vmd.o 00:14:14.738 LINK cuse 00:14:14.738 CXX test/cpp_headers/xor.o 00:14:14.738 CXX test/cpp_headers/zipf.o 00:14:15.306 LINK esnap 00:14:20.575 00:14:20.575 real 1m10.588s 00:14:20.575 user 7m11.367s 00:14:20.575 sys 1m46.772s 00:14:20.575 13:24:36 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:14:20.575 13:24:36 -- common/autotest_common.sh@10 -- $ set +x 00:14:20.575 ************************************ 00:14:20.575 END TEST make 00:14:20.575 ************************************ 00:14:20.575 13:24:37 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:20.575 13:24:37 -- pm/common@30 -- $ signal_monitor_resources TERM 00:14:20.575 13:24:37 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:14:20.575 13:24:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:20.575 13:24:37 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:20.575 13:24:37 -- pm/common@45 -- $ pid=5292 00:14:20.575 13:24:37 -- pm/common@52 -- $ sudo kill -TERM 5292 00:14:20.575 13:24:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:20.575 13:24:37 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:20.575 13:24:37 -- pm/common@45 -- $ pid=5293 00:14:20.575 13:24:37 -- pm/common@52 -- $ sudo kill -TERM 5293 00:14:20.575 13:24:37 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.575 13:24:37 -- nvmf/common.sh@7 -- # uname -s 00:14:20.575 13:24:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.575 13:24:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.575 13:24:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.575 13:24:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.575 13:24:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.575 13:24:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.575 13:24:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.575 13:24:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.575 13:24:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.575 13:24:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.575 13:24:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:14:20.575 13:24:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:14:20.575 13:24:37 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.575 13:24:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.575 13:24:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.575 13:24:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.575 13:24:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.575 13:24:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.575 13:24:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.575 13:24:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.575 13:24:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.575 13:24:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.575 13:24:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.575 13:24:37 -- paths/export.sh@5 -- # export PATH 00:14:20.575 13:24:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.575 13:24:37 -- nvmf/common.sh@47 -- # : 0 00:14:20.575 13:24:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.575 13:24:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.575 13:24:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.575 13:24:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.575 13:24:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.575 13:24:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.575 13:24:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.575 13:24:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.575 13:24:37 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:20.575 13:24:37 -- spdk/autotest.sh@32 -- # uname -s 00:14:20.575 13:24:37 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:20.575 13:24:37 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:20.575 13:24:37 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:20.575 13:24:37 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:20.575 13:24:37 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:20.575 13:24:37 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:20.575 13:24:37 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:20.575 13:24:37 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:20.575 13:24:37 -- spdk/autotest.sh@48 -- # udevadm_pid=54081 
00:14:20.575 13:24:37 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:20.575 13:24:37 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:20.575 13:24:37 -- pm/common@17 -- # local monitor 00:14:20.575 13:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:20.575 13:24:37 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54084 00:14:20.575 13:24:37 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:20.575 13:24:37 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54087 00:14:20.575 13:24:37 -- pm/common@26 -- # sleep 1 00:14:20.575 13:24:37 -- pm/common@21 -- # date +%s 00:14:20.575 13:24:37 -- pm/common@21 -- # date +%s 00:14:20.575 13:24:37 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714137877 00:14:20.575 13:24:37 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714137877 00:14:20.575 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714137877_collect-vmstat.pm.log 00:14:20.575 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714137877_collect-cpu-load.pm.log 00:14:20.835 13:24:38 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:20.835 13:24:38 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:20.835 13:24:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:20.835 13:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:20.835 13:24:38 -- spdk/autotest.sh@59 -- # create_test_list 00:14:20.835 13:24:38 -- common/autotest_common.sh@734 -- # xtrace_disable 00:14:20.835 13:24:38 -- common/autotest_common.sh@10 -- # set +x 00:14:21.095 13:24:38 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:21.095 13:24:38 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:21.095 13:24:38 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:21.095 13:24:38 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:21.095 13:24:38 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:21.095 13:24:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:21.095 13:24:38 -- common/autotest_common.sh@1441 -- # uname 00:14:21.095 13:24:38 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:14:21.095 13:24:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:21.095 13:24:38 -- common/autotest_common.sh@1461 -- # uname 00:14:21.095 13:24:38 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:14:21.095 13:24:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:14:21.095 13:24:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:14:21.095 13:24:38 -- spdk/autotest.sh@72 -- # hash lcov 00:14:21.095 13:24:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:14:21.095 13:24:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:14:21.095 --rc lcov_branch_coverage=1 00:14:21.095 --rc lcov_function_coverage=1 00:14:21.095 --rc genhtml_branch_coverage=1 00:14:21.095 --rc genhtml_function_coverage=1 00:14:21.095 --rc genhtml_legend=1 00:14:21.095 --rc geninfo_all_blocks=1 00:14:21.095 ' 00:14:21.095 13:24:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:14:21.095 --rc lcov_branch_coverage=1 00:14:21.095 --rc lcov_function_coverage=1 
00:14:21.095 --rc genhtml_branch_coverage=1 00:14:21.095 --rc genhtml_function_coverage=1 00:14:21.095 --rc genhtml_legend=1 00:14:21.095 --rc geninfo_all_blocks=1 00:14:21.095 ' 00:14:21.095 13:24:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:14:21.095 --rc lcov_branch_coverage=1 00:14:21.095 --rc lcov_function_coverage=1 00:14:21.095 --rc genhtml_branch_coverage=1 00:14:21.095 --rc genhtml_function_coverage=1 00:14:21.095 --rc genhtml_legend=1 00:14:21.095 --rc geninfo_all_blocks=1 00:14:21.095 --no-external' 00:14:21.095 13:24:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:14:21.095 --rc lcov_branch_coverage=1 00:14:21.095 --rc lcov_function_coverage=1 00:14:21.095 --rc genhtml_branch_coverage=1 00:14:21.095 --rc genhtml_function_coverage=1 00:14:21.095 --rc genhtml_legend=1 00:14:21.095 --rc geninfo_all_blocks=1 00:14:21.095 --no-external' 00:14:21.095 13:24:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:14:21.095 lcov: LCOV version 1.14 00:14:21.095 13:24:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:29.237 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:14:29.237 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:14:29.237 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:14:29.237 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:14:29.237 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:14:29.237 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:14:35.852 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:35.852 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 
00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 
00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:14:48.114 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:14:48.114 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:14:48.115 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:14:48.115 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:14:48.115 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:14:48.115 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:14:48.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:14:48.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:14:51.405 13:25:08 -- spdk/autotest.sh@89 -- 
# timing_enter pre_cleanup 00:14:51.405 13:25:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:51.405 13:25:08 -- common/autotest_common.sh@10 -- # set +x 00:14:51.405 13:25:08 -- spdk/autotest.sh@91 -- # rm -f 00:14:51.405 13:25:08 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:51.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:51.973 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:51.973 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:51.973 13:25:09 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:14:51.973 13:25:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:51.973 13:25:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:51.973 13:25:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:51.973 13:25:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:51.973 13:25:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:51.973 13:25:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:51.973 13:25:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:51.973 13:25:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:51.973 13:25:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:51.973 13:25:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:51.973 13:25:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:51.973 13:25:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:51.973 13:25:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:51.973 13:25:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:51.973 13:25:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:51.973 13:25:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:51.973 13:25:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:51.973 13:25:09 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:14:51.973 13:25:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:51.973 13:25:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:51.973 13:25:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:14:51.973 13:25:09 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:14:51.973 13:25:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:52.232 No valid GPT data, bailing 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # pt= 00:14:52.232 13:25:09 -- scripts/common.sh@392 -- # return 1 00:14:52.232 13:25:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:52.232 1+0 records in 00:14:52.232 1+0 records out 00:14:52.232 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.00557428 s, 188 MB/s 00:14:52.232 13:25:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.232 13:25:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:52.232 13:25:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:14:52.232 13:25:09 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:14:52.232 13:25:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:52.232 No valid GPT data, bailing 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # pt= 00:14:52.232 13:25:09 -- scripts/common.sh@392 -- # return 1 00:14:52.232 13:25:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:52.232 1+0 records in 00:14:52.232 1+0 records out 00:14:52.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509494 s, 206 MB/s 00:14:52.232 13:25:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.232 13:25:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:52.232 13:25:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:14:52.232 13:25:09 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:14:52.232 13:25:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:14:52.232 No valid GPT data, bailing 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:52.232 13:25:09 -- scripts/common.sh@391 -- # pt= 00:14:52.232 13:25:09 -- scripts/common.sh@392 -- # return 1 00:14:52.232 13:25:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:52.232 1+0 records in 00:14:52.232 1+0 records out 00:14:52.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461759 s, 227 MB/s 00:14:52.232 13:25:09 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.232 13:25:09 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:14:52.232 13:25:09 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:14:52.232 13:25:09 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:14:52.232 13:25:09 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:52.491 No valid GPT data, bailing 00:14:52.491 13:25:09 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:52.491 13:25:09 -- scripts/common.sh@391 -- # pt= 00:14:52.491 13:25:09 -- scripts/common.sh@392 -- # return 1 00:14:52.491 13:25:09 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:14:52.491 1+0 records in 00:14:52.491 1+0 records out 00:14:52.491 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479956 s, 218 MB/s 00:14:52.491 13:25:09 -- spdk/autotest.sh@118 -- # sync 00:14:52.491 13:25:09 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:52.491 13:25:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:52.491 13:25:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:54.398 13:25:11 -- spdk/autotest.sh@124 -- # uname -s 00:14:54.398 13:25:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:14:54.398 13:25:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:54.398 13:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:54.398 13:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.398 13:25:11 -- common/autotest_common.sh@10 -- # set +x 00:14:54.398 
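The pre_cleanup pass traced above skips zoned namespaces, probes each NVMe namespace for a partition table (spdk-gpt.py reports "No valid GPT data, bailing", then blkid is consulted), and zeroes the first MiB of anything unclaimed before the setup tests start. A rough, simplified reconstruction of that flow is sketched below; the real script iterates the extglob pattern /dev/nvme*n!(*p*) and calls SPDK's spdk-gpt.py, while this version leans on blkid only, so treat it as an approximation rather than the script itself:

    for dev in /dev/nvme*n*; do
        name=$(basename "$dev")
        # zoned namespaces are left alone, mirroring the is_block_zoned checks in the trace
        [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]] && continue
        # only wipe devices with no recognizable partition table
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # clobber the first MiB, as in the dd lines above
        fi
    done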
************************************ 00:14:54.398 START TEST setup.sh 00:14:54.398 ************************************ 00:14:54.398 13:25:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:14:54.398 * Looking for test storage... 00:14:54.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:54.398 13:25:11 -- setup/test-setup.sh@10 -- # uname -s 00:14:54.398 13:25:11 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:14:54.398 13:25:11 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:54.398 13:25:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:54.398 13:25:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:54.398 13:25:11 -- common/autotest_common.sh@10 -- # set +x 00:14:54.657 ************************************ 00:14:54.657 START TEST acl 00:14:54.657 ************************************ 00:14:54.657 13:25:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:14:54.657 * Looking for test storage... 00:14:54.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:54.657 13:25:11 -- setup/acl.sh@10 -- # get_zoned_devs 00:14:54.657 13:25:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:54.657 13:25:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:54.657 13:25:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:54.658 13:25:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:54.658 13:25:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:54.658 13:25:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:54.658 13:25:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:54.658 13:25:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:54.658 13:25:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:54.658 13:25:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:54.658 13:25:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:14:54.658 13:25:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:14:54.658 13:25:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:54.658 13:25:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:14:54.658 13:25:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:14:54.658 13:25:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:54.658 13:25:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:54.658 13:25:11 -- setup/acl.sh@12 -- # devs=() 00:14:54.658 13:25:11 -- setup/acl.sh@12 -- # declare -a devs 00:14:54.658 13:25:11 -- setup/acl.sh@13 -- # drivers=() 00:14:54.658 13:25:11 -- setup/acl.sh@13 -- # declare -A drivers 00:14:54.658 13:25:11 -- setup/acl.sh@51 -- # setup reset 00:14:54.658 13:25:11 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:14:54.658 13:25:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:55.225 13:25:12 -- setup/acl.sh@52 -- # collect_setup_devs 00:14:55.225 13:25:12 -- setup/acl.sh@16 -- # local dev driver 00:14:55.225 13:25:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:55.225 13:25:12 -- setup/acl.sh@15 -- # setup output status 00:14:55.225 13:25:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:55.225 13:25:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # continue 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 Hugepages 00:14:56.162 node hugesize free / total 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # continue 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 00:14:56.162 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # continue 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:14:56.162 13:25:13 -- setup/acl.sh@20 -- # continue 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:56.162 13:25:13 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:56.162 13:25:13 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 13:25:13 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:14:56.162 13:25:13 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:14:56.162 13:25:13 -- setup/acl.sh@22 -- # devs+=("$dev") 00:14:56.162 13:25:13 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:14:56.162 13:25:13 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:14:56.162 13:25:13 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:14:56.162 13:25:13 -- setup/acl.sh@54 -- # run_test denied denied 00:14:56.162 13:25:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:56.162 13:25:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:56.162 13:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:56.422 ************************************ 00:14:56.422 START TEST denied 00:14:56.422 ************************************ 00:14:56.422 13:25:13 -- common/autotest_common.sh@1111 -- # denied 00:14:56.422 13:25:13 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:14:56.422 13:25:13 -- setup/acl.sh@38 -- # setup output config 00:14:56.422 13:25:13 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:14:56.422 13:25:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:56.422 13:25:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:57.358 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:14:57.358 13:25:14 -- setup/acl.sh@40 -- # 
verify 0000:00:10.0 00:14:57.358 13:25:14 -- setup/acl.sh@28 -- # local dev driver 00:14:57.358 13:25:14 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:57.358 13:25:14 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:14:57.358 13:25:14 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:14:57.358 13:25:14 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:57.358 13:25:14 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:57.358 13:25:14 -- setup/acl.sh@41 -- # setup reset 00:14:57.358 13:25:14 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:57.358 13:25:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:57.924 00:14:57.924 real 0m1.443s 00:14:57.924 user 0m0.589s 00:14:57.924 sys 0m0.799s 00:14:57.924 13:25:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.924 13:25:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.924 ************************************ 00:14:57.924 END TEST denied 00:14:57.924 ************************************ 00:14:57.924 13:25:15 -- setup/acl.sh@55 -- # run_test allowed allowed 00:14:57.924 13:25:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:57.924 13:25:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.924 13:25:15 -- common/autotest_common.sh@10 -- # set +x 00:14:57.924 ************************************ 00:14:57.924 START TEST allowed 00:14:57.924 ************************************ 00:14:57.924 13:25:15 -- common/autotest_common.sh@1111 -- # allowed 00:14:57.924 13:25:15 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:14:57.924 13:25:15 -- setup/acl.sh@45 -- # setup output config 00:14:57.924 13:25:15 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:14:57.924 13:25:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:57.924 13:25:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:14:58.859 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:58.859 13:25:16 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:14:58.859 13:25:16 -- setup/acl.sh@28 -- # local dev driver 00:14:58.859 13:25:16 -- setup/acl.sh@30 -- # for dev in "$@" 00:14:58.859 13:25:16 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:14:58.859 13:25:16 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:14:58.859 13:25:16 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:14:58.859 13:25:16 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:14:58.859 13:25:16 -- setup/acl.sh@48 -- # setup reset 00:14:58.859 13:25:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:14:58.859 13:25:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:59.426 00:14:59.426 real 0m1.521s 00:14:59.426 user 0m0.694s 00:14:59.426 sys 0m0.825s 00:14:59.426 13:25:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.426 13:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.426 ************************************ 00:14:59.426 END TEST allowed 00:14:59.426 ************************************ 00:14:59.426 00:14:59.426 real 0m4.916s 00:14:59.426 user 0m2.143s 00:14:59.426 sys 0m2.692s 00:14:59.426 13:25:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.426 13:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.426 ************************************ 00:14:59.426 END TEST acl 00:14:59.426 ************************************ 00:14:59.426 13:25:16 -- setup/test-setup.sh@13 -- 
# run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:59.426 13:25:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:59.426 13:25:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.426 13:25:16 -- common/autotest_common.sh@10 -- # set +x 00:14:59.686 ************************************ 00:14:59.686 START TEST hugepages 00:14:59.686 ************************************ 00:14:59.686 13:25:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:14:59.686 * Looking for test storage... 00:14:59.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:14:59.686 13:25:16 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:14:59.686 13:25:16 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:14:59.686 13:25:16 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:14:59.686 13:25:16 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:14:59.686 13:25:16 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:14:59.686 13:25:16 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:14:59.686 13:25:16 -- setup/common.sh@17 -- # local get=Hugepagesize 00:14:59.686 13:25:16 -- setup/common.sh@18 -- # local node= 00:14:59.686 13:25:16 -- setup/common.sh@19 -- # local var val 00:14:59.686 13:25:16 -- setup/common.sh@20 -- # local mem_f mem 00:14:59.686 13:25:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:14:59.686 13:25:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:14:59.686 13:25:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:14:59.686 13:25:16 -- setup/common.sh@28 -- # mapfile -t mem 00:14:59.686 13:25:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5444164 kB' 'MemAvailable: 7389552 kB' 'Buffers: 3456 kB' 'Cached: 2154184 kB' 'SwapCached: 0 kB' 'Active: 877372 kB' 'Inactive: 1387804 kB' 'Active(anon): 118024 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 109432 kB' 'Mapped: 48856 kB' 'Shmem: 10488 kB' 'KReclaimable: 70352 kB' 'Slab: 145664 kB' 'SReclaimable: 70352 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6440 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 341572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.686 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.686 13:25:16 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.686 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:16 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:16 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # continue 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:14:59.687 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:14:59.687 13:25:17 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:14:59.687 13:25:17 -- setup/common.sh@33 -- # echo 2048 
00:14:59.687 13:25:17 -- setup/common.sh@33 -- # return 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:14:59.687 13:25:17 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:14:59.687 13:25:17 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:14:59.687 13:25:17 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:14:59.687 13:25:17 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:14:59.687 13:25:17 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:14:59.687 13:25:17 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:14:59.687 13:25:17 -- setup/hugepages.sh@207 -- # get_nodes 00:14:59.687 13:25:17 -- setup/hugepages.sh@27 -- # local node 00:14:59.687 13:25:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:14:59.687 13:25:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:14:59.687 13:25:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:14:59.687 13:25:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:14:59.687 13:25:17 -- setup/hugepages.sh@208 -- # clear_hp 00:14:59.687 13:25:17 -- setup/hugepages.sh@37 -- # local node hp 00:14:59.687 13:25:17 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:14:59.687 13:25:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:59.687 13:25:17 -- setup/hugepages.sh@41 -- # echo 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:14:59.687 13:25:17 -- setup/hugepages.sh@41 -- # echo 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:14:59.687 13:25:17 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:14:59.687 13:25:17 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:14:59.687 13:25:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:59.687 13:25:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:59.687 13:25:17 -- common/autotest_common.sh@10 -- # set +x 00:14:59.687 ************************************ 00:14:59.687 START TEST default_setup 00:14:59.687 ************************************ 00:14:59.687 13:25:17 -- common/autotest_common.sh@1111 -- # default_setup 00:14:59.687 13:25:17 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@49 -- # local size=2097152 00:14:59.687 13:25:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:14:59.687 13:25:17 -- setup/hugepages.sh@51 -- # shift 00:14:59.687 13:25:17 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:14:59.687 13:25:17 -- setup/hugepages.sh@52 -- # local node_ids 00:14:59.687 13:25:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:14:59.687 13:25:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:14:59.687 13:25:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:14:59.687 13:25:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:14:59.687 13:25:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:14:59.687 13:25:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:14:59.687 13:25:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:14:59.687 13:25:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:14:59.687 13:25:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:14:59.687 13:25:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:14:59.687 
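By this point the trace has sized and reset the pool: get_test_nr_hugepages turns the 2097152 kB request into 2097152 / 2048 = 1024 pages for node 0, and clear_hp has echoed 0 into every per-node hugepage count before the test writes its own. A rough equivalent of those two steps, assuming the standard kernel sysfs layout and root privileges; variable names are illustrative:

size_kb=2097152                              # the default_setup request (2 GiB)
hugepage_kb=2048                             # Hugepagesize read earlier
nr_hugepages=$(( size_kb / hugepage_kb ))    # = 1024 pages

# clear_hp-style reset: drop any pre-existing per-node reservations
for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"
done
export CLEAR_HUGE=yes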
13:25:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:14:59.687 13:25:17 -- setup/hugepages.sh@73 -- # return 0 00:14:59.687 13:25:17 -- setup/hugepages.sh@137 -- # setup output 00:14:59.687 13:25:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:14:59.687 13:25:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:00.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:00.513 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.513 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:00.513 13:25:17 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:15:00.513 13:25:17 -- setup/hugepages.sh@89 -- # local node 00:15:00.513 13:25:17 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:00.513 13:25:17 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:00.513 13:25:17 -- setup/hugepages.sh@92 -- # local surp 00:15:00.513 13:25:17 -- setup/hugepages.sh@93 -- # local resv 00:15:00.513 13:25:17 -- setup/hugepages.sh@94 -- # local anon 00:15:00.513 13:25:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:00.513 13:25:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:00.513 13:25:17 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:00.513 13:25:17 -- setup/common.sh@18 -- # local node= 00:15:00.513 13:25:17 -- setup/common.sh@19 -- # local var val 00:15:00.513 13:25:17 -- setup/common.sh@20 -- # local mem_f mem 00:15:00.513 13:25:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:00.513 13:25:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:00.513 13:25:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:00.513 13:25:17 -- setup/common.sh@28 -- # mapfile -t mem 00:15:00.513 13:25:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548860 kB' 'MemAvailable: 9494096 kB' 'Buffers: 3456 kB' 'Cached: 2154212 kB' 'SwapCached: 0 kB' 'Active: 894064 kB' 'Inactive: 1387844 kB' 'Active(anon): 134716 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 125804 kB' 'Mapped: 49004 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145284 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6368 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 
-- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 
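The "always [madvise] never" test at hugepages.sh@96 above is verify_nr_hugepages checking the transparent-hugepage mode (madvise on this VM) before it reads AnonHugePages, which the scan in progress here reports as 0 kB. The same gate written stand-alone; the sysfs path is the stock kernel location and the variable names are illustrative:

thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp_mode != *"[never]"* ]]; then
    # THP is not disabled outright, so the AnonHugePages figure is meaningful
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in this run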
00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.513 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.513 13:25:17 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 
13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:00.514 13:25:17 -- setup/common.sh@33 -- # echo 0 00:15:00.514 13:25:17 -- setup/common.sh@33 -- # return 0 00:15:00.514 13:25:17 -- setup/hugepages.sh@97 -- # anon=0 00:15:00.514 13:25:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:00.514 13:25:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:00.514 13:25:17 -- setup/common.sh@18 -- # local node= 00:15:00.514 13:25:17 -- setup/common.sh@19 -- # local var val 00:15:00.514 13:25:17 -- setup/common.sh@20 -- # local mem_f mem 00:15:00.514 13:25:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:00.514 13:25:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:00.514 13:25:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:00.514 13:25:17 -- setup/common.sh@28 -- # mapfile -t mem 00:15:00.514 13:25:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548860 kB' 'MemAvailable: 9494096 kB' 'Buffers: 3456 kB' 'Cached: 2154212 kB' 'SwapCached: 0 kB' 'Active: 894152 kB' 'Inactive: 1387844 kB' 'Active(anon): 134804 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 125928 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145284 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6384 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.514 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.514 13:25:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 
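Every get_meminfo call in this log runs with an empty node argument, which is why the -e /sys/devices/system/node/node/meminfo test keeps failing and the scan falls back to /proc/meminfo; the ${mem[@]#Node +([0-9]) } expansion only does work for the per-node file, whose lines carry a "Node N " prefix. A node-aware sketch of that fallback, reconstructed from the mapfile/extglob lines in the trace; meminfo_value is an illustrative name:

shopt -s extglob

meminfo_value() {
    # $1 = key (e.g. HugePages_Surp), $2 = optional NUMA node number
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}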
00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- 
setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 
00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.515 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.515 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.516 13:25:17 -- setup/common.sh@33 -- # echo 0 00:15:00.516 13:25:17 -- setup/common.sh@33 -- # return 0 00:15:00.516 13:25:17 -- setup/hugepages.sh@99 -- # surp=0 00:15:00.516 13:25:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:00.516 13:25:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:00.516 13:25:17 -- setup/common.sh@18 -- # local node= 00:15:00.516 13:25:17 -- setup/common.sh@19 -- # local var val 00:15:00.516 13:25:17 -- setup/common.sh@20 -- # local mem_f mem 00:15:00.516 13:25:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:00.516 13:25:17 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:00.516 13:25:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:00.516 13:25:17 -- setup/common.sh@28 -- # mapfile -t mem 00:15:00.516 13:25:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548860 kB' 'MemAvailable: 9494096 kB' 'Buffers: 3456 kB' 'Cached: 2154212 kB' 'SwapCached: 0 kB' 'Active: 893692 kB' 'Inactive: 1387844 kB' 'Active(anon): 134344 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 125768 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145284 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75316 kB' 'KernelStack: 6416 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 
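AnonHugePages and HugePages_Surp have both come back 0 (anon=0, surp=0 above), and the HugePages_Rsvd pass starting here returns 0 as well, per the meminfo dumps. verify_nr_hugepages then checks that the pool it finds matches what default_setup asked for plus any surplus and reserved pages. The bookkeeping with this run's values plugged in; names mirror the trace but the snippet is a sketch, not the script itself:

nr_hugepages=1024   # requested by default_setup
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd

expected=$(( nr_hugepages + surp + resv ))                        # 1024
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024 here
(( total == expected )) && echo "hugepage pool matches: $total pages"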
00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.516 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.516 13:25:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 
-- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- 
setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.777 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.777 13:25:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:00.777 13:25:17 -- setup/common.sh@33 -- # echo 0 00:15:00.777 13:25:17 -- setup/common.sh@33 -- # return 0 00:15:00.777 13:25:17 -- setup/hugepages.sh@100 -- # resv=0 00:15:00.777 nr_hugepages=1024 00:15:00.777 13:25:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:00.778 resv_hugepages=0 00:15:00.778 13:25:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:00.778 surplus_hugepages=0 00:15:00.778 13:25:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:00.778 anon_hugepages=0 00:15:00.778 13:25:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:00.778 13:25:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:00.778 13:25:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:00.778 13:25:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:00.778 13:25:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:00.778 13:25:17 -- setup/common.sh@18 -- # local node= 00:15:00.778 13:25:17 -- setup/common.sh@19 -- # local var val 00:15:00.778 13:25:17 -- setup/common.sh@20 -- # local mem_f mem 00:15:00.778 13:25:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:00.778 13:25:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:00.778 13:25:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:00.778 13:25:17 -- setup/common.sh@28 -- # mapfile -t mem 00:15:00.778 13:25:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548860 kB' 'MemAvailable: 9494096 kB' 'Buffers: 3456 kB' 'Cached: 2154212 kB' 'SwapCached: 0 kB' 'Active: 893976 kB' 'Inactive: 1387844 kB' 'Active(anon): 134628 kB' 
'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'AnonPages: 125804 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145280 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6416 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- 
setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.778 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.778 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 
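The long runs of "IFS=': '" / "read -r var val _" / "continue" traced above and below are a single pass of setup/common.sh's meminfo scanner hunting for one key (here HugePages_Total). A minimal sketch of that loop, reconstructed only from the commands visible in this xtrace; the function name and exact structure are assumptions, not the original source.
# --- editorial sketch, not part of the console log ---
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    local var val _
    # A node argument switches the source to that node's meminfo, whose lines
    # carry a "Node N " prefix that gets stripped below (as seen in the trace).
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan "Key: value kB" lines until the requested key matches, then print it.
    # (The original matches the key as an escaped glob pattern; a quoted literal
    # comparison behaves the same for these keys.)
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total  -> 1024 in the run traced here
# --- end sketch ---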
00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:17 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:17 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:18 -- 
setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:00.779 13:25:18 -- setup/common.sh@33 -- # echo 1024 00:15:00.779 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:00.779 13:25:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:00.779 13:25:18 -- setup/hugepages.sh@112 -- # get_nodes 00:15:00.779 13:25:18 -- setup/hugepages.sh@27 -- # local node 00:15:00.779 13:25:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:00.779 13:25:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:00.779 13:25:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:00.779 13:25:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:00.779 13:25:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:00.779 13:25:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:00.779 13:25:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:00.779 13:25:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:00.779 13:25:18 -- setup/common.sh@18 -- # local node=0 00:15:00.779 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:00.779 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:00.779 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:00.779 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:00.779 13:25:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:00.779 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:00.779 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548860 kB' 'MemUsed: 4693112 kB' 'SwapCached: 0 kB' 'Active: 893996 kB' 'Inactive: 1387844 kB' 'Active(anon): 134648 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 700 kB' 'Writeback: 0 kB' 'FilePages: 2157668 kB' 'Mapped: 48876 kB' 'AnonPages: 125808 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 145280 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 
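The hugepages.sh@110-@117 steps in the trace above boil down to a pool consistency check: HugePages_Total must equal the requested page count plus surplus and reserved pages, then the per-node counts are read back through the same meminfo scanner. A compact sketch of that check, with function name and argument handling assumed; the arithmetic mirrors the values actually traced in this run (1024 == 1024 + 0 + 0).
# --- editorial sketch, not part of the console log ---
verify_pool_sketch() {
    # total: HugePages_Total from meminfo; requested: the test's nr_hugepages;
    # surp/resv: HugePages_Surp / HugePages_Rsvd read back the same way.
    local total=$1 requested=$2 surp=$3 resv=$4
    (( total == requested + surp + resv )) || { echo "pool mismatch"; return 1; }
    echo "node0=$total expecting $requested"
}
verify_pool_sketch 1024 1024 0 0   # prints the same "node0=1024 expecting 1024" seen below
# --- end sketch ---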
00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.779 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.779 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # continue 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:00.780 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:00.780 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:00.780 13:25:18 -- setup/common.sh@33 -- # echo 0 00:15:00.780 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:00.780 13:25:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:00.780 13:25:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:00.780 13:25:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:00.780 13:25:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:00.780 node0=1024 expecting 1024 00:15:00.780 13:25:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:00.780 13:25:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:00.780 00:15:00.780 real 0m0.931s 00:15:00.780 user 0m0.430s 00:15:00.780 sys 0m0.452s 00:15:00.780 13:25:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:00.780 13:25:18 -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.780 ************************************ 00:15:00.780 END TEST default_setup 00:15:00.780 ************************************ 00:15:00.780 13:25:18 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:15:00.780 13:25:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:00.780 13:25:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:00.780 13:25:18 -- common/autotest_common.sh@10 -- # set +x 00:15:00.780 ************************************ 00:15:00.780 START TEST per_node_1G_alloc 00:15:00.780 ************************************ 00:15:00.780 13:25:18 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:15:00.780 13:25:18 -- setup/hugepages.sh@143 -- # local IFS=, 00:15:00.780 13:25:18 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:15:00.780 13:25:18 -- setup/hugepages.sh@49 -- # local size=1048576 00:15:00.780 13:25:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:00.780 13:25:18 -- setup/hugepages.sh@51 -- # shift 00:15:00.780 13:25:18 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:15:00.780 13:25:18 -- setup/hugepages.sh@52 -- # local node_ids 00:15:00.780 13:25:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:00.780 13:25:18 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:00.780 13:25:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:00.780 13:25:18 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:15:00.780 13:25:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:00.780 13:25:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:00.780 13:25:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:00.780 13:25:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:00.780 13:25:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:00.780 13:25:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:00.780 13:25:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:00.780 13:25:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:15:00.780 13:25:18 -- setup/hugepages.sh@73 -- # return 0 00:15:00.780 13:25:18 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:15:00.780 13:25:18 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:15:00.780 13:25:18 -- setup/hugepages.sh@146 -- # setup output 00:15:00.780 13:25:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:00.780 13:25:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:01.039 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.302 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.302 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.302 13:25:18 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:15:01.302 13:25:18 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:15:01.302 13:25:18 -- setup/hugepages.sh@89 -- # local node 00:15:01.302 13:25:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:01.302 13:25:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:01.302 13:25:18 -- setup/hugepages.sh@92 -- # local surp 00:15:01.302 13:25:18 -- setup/hugepages.sh@93 -- # local resv 00:15:01.302 13:25:18 -- setup/hugepages.sh@94 -- # local anon 00:15:01.302 13:25:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:01.302 13:25:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:01.302 13:25:18 -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:15:01.302 13:25:18 -- setup/common.sh@18 -- # local node= 00:15:01.302 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:01.302 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.302 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.302 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.302 13:25:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.302 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.302 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8596492 kB' 'MemAvailable: 10541744 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 894124 kB' 'Inactive: 1387860 kB' 'Active(anon): 134776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 126116 kB' 'Mapped: 49004 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145320 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75352 kB' 'KernelStack: 6388 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # 
continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.302 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.302 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 
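The per_node_1G_alloc test opened above sizes its request with get_test_nr_hugepages 1048576 0: 1048576 kB (1 GiB) on node 0, which at the 2048 kB Hugepagesize reported in the meminfo dumps works out to the nr_hugepages=512 and HugePages_Total: 512 values visible in this trace. The arithmetic, as a one-liner:
# --- editorial sketch, not part of the console log ---
size_kb=1048576          # 1 GiB requested for the per-node test
hugepagesize_kb=2048     # Hugepagesize reported in the meminfo dumps above
echo $(( size_kb / hugepagesize_kb ))   # -> 512, matching nr_hugepages=512
# --- end sketch ---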
00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.303 13:25:18 -- setup/common.sh@33 -- # echo 0 00:15:01.303 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:01.303 13:25:18 -- setup/hugepages.sh@97 -- # anon=0 00:15:01.303 13:25:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:01.303 13:25:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:01.303 13:25:18 -- setup/common.sh@18 -- # local node= 00:15:01.303 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:01.303 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.303 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.303 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.303 13:25:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.303 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.303 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8596492 kB' 'MemAvailable: 10541744 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893780 kB' 'Inactive: 1387860 kB' 'Active(anon): 134432 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 125824 kB' 'Mapped: 48884 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6416 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.303 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.303 13:25:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 
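The NRHUGE=512 HUGENODE=0 setup step traced earlier is what this verification pass is checking. The setup.sh internals are not shown in this part of the log, but per-node hugepage pools are normally sized through the kernel's per-node sysfs knob; the following is only an illustration of that standard interface (paths per the kernel hugetlb ABI, not commands taken from this log; requires root).
# --- editorial sketch, not part of the console log ---
echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # expect 512
# --- end sketch ---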
00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- 
setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.304 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.304 13:25:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.305 13:25:18 -- setup/common.sh@33 -- # echo 0 00:15:01.305 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:01.305 13:25:18 -- setup/hugepages.sh@99 -- # surp=0 00:15:01.305 13:25:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:01.305 13:25:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:01.305 13:25:18 -- setup/common.sh@18 -- # local node= 00:15:01.305 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:01.305 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.305 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.305 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.305 13:25:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.305 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.305 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8596492 kB' 'MemAvailable: 10541744 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893868 kB' 'Inactive: 1387860 kB' 'Active(anon): 134520 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 125888 kB' 'Mapped: 48884 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6400 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var 
val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.305 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.305 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 
13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.306 13:25:18 -- setup/common.sh@33 -- # echo 0 00:15:01.306 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:01.306 13:25:18 -- setup/hugepages.sh@100 -- # resv=0 00:15:01.306 13:25:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:01.306 nr_hugepages=512 00:15:01.306 resv_hugepages=0 00:15:01.306 surplus_hugepages=0 00:15:01.306 anon_hugepages=0 00:15:01.306 13:25:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:01.306 13:25:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:01.306 13:25:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:01.306 13:25:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:01.306 13:25:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:01.306 13:25:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:01.306 13:25:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:01.306 13:25:18 -- setup/common.sh@18 -- # local node= 00:15:01.306 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:01.306 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.306 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.306 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.306 13:25:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.306 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.306 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8596752 kB' 'MemAvailable: 10542004 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893844 kB' 'Inactive: 1387860 kB' 'Active(anon): 134496 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 125724 kB' 'Mapped: 48884 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6432 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 357996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 
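The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries traced here are get_meminfo from setup/common.sh scanning every field of the captured meminfo snapshot until it reaches the requested key: HugePages_Rsvd just came back as 0 (resv=0), and the HugePages_Total lookup that starts above repeats the same scan. A minimal stand-alone sketch of that lookup follows; the helper name and the simplified node handling are illustrative only, not the verbatim SPDK code.
#!/usr/bin/env bash
# Sketch: look up one meminfo field, system-wide or for a single NUMA node.
shopt -s extglob
get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  # Per-node figures live in sysfs, e.g. /sys/devices/system/node/node0/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    line=${line#Node +([0-9]) }            # node files prefix each line with "Node <id> "
    IFS=': ' read -r var val _ <<<"$line"
    if [[ $var == "$get" ]]; then
      echo "$val"                          # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
      return 0
    fi
  done <"$mem_f"
  return 1
}
# get_meminfo_sketch HugePages_Rsvd       -> 0
# get_meminfo_sketch HugePages_Total 0    -> 512 (node0)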
00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.306 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.306 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 
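The backslash-escaped names in these comparisons (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and friends) are not literal backslashes in the script: they are how set -x renders the quoted right-hand side of a [[ $var == "$get" ]] test, escaping each character so the traced line still reads as a literal string match rather than a glob. A short reproduction, illustrative of the rendering captured in this log:
set -x
get=HugePages_Total
[[ MemTotal == "$get" ]]
# traced roughly as: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]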
00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 
-- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.307 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.307 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:01.308 13:25:18 -- setup/common.sh@33 -- # echo 512 00:15:01.308 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:01.308 13:25:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:01.308 13:25:18 -- setup/hugepages.sh@112 -- # get_nodes 00:15:01.308 13:25:18 -- setup/hugepages.sh@27 -- # local node 00:15:01.308 13:25:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:01.308 13:25:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:01.308 13:25:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:01.308 13:25:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:01.308 13:25:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:01.308 13:25:18 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:15:01.308 13:25:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:01.308 13:25:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:01.308 13:25:18 -- setup/common.sh@18 -- # local node=0 00:15:01.308 13:25:18 -- setup/common.sh@19 -- # local var val 00:15:01.308 13:25:18 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.308 13:25:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.308 13:25:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:01.308 13:25:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:01.308 13:25:18 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.308 13:25:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8596900 kB' 'MemUsed: 3645072 kB' 'SwapCached: 0 kB' 'Active: 893784 kB' 'Inactive: 1387860 kB' 'Active(anon): 134436 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'FilePages: 2157672 kB' 'Mapped: 48884 kB' 'AnonPages: 125612 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 
13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 
13:25:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.308 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.308 13:25:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # continue 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.309 13:25:18 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.309 13:25:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.309 13:25:18 -- setup/common.sh@33 -- # echo 0 00:15:01.309 13:25:18 -- setup/common.sh@33 -- # return 0 00:15:01.309 13:25:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:01.309 13:25:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:01.309 13:25:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:01.309 13:25:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:01.309 13:25:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:01.309 node0=512 expecting 512 00:15:01.309 13:25:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:01.309 00:15:01.309 real 0m0.550s 00:15:01.309 user 0m0.260s 00:15:01.309 sys 0m0.298s 00:15:01.309 13:25:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:01.309 13:25:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 ************************************ 00:15:01.309 END TEST per_node_1G_alloc 00:15:01.309 ************************************ 00:15:01.309 13:25:18 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:15:01.309 13:25:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:01.309 13:25:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.309 13:25:18 -- common/autotest_common.sh@10 -- # set +x 00:15:01.568 ************************************ 00:15:01.568 START TEST even_2G_alloc 00:15:01.568 ************************************ 00:15:01.568 13:25:18 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:15:01.568 13:25:18 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:15:01.568 13:25:18 -- setup/hugepages.sh@49 -- # local size=2097152 00:15:01.568 13:25:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:01.568 13:25:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:01.568 13:25:18 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:01.568 13:25:18 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:01.568 13:25:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:01.568 
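The page counts in the two tests around this point follow directly from the 2048 kB hugepage size reported in the meminfo dumps: per_node_1G_alloc, which just finished, pinned 512 pages (node0=512 expecting 512), while even_2G_alloc, starting here, passes 2097152 to get_test_nr_hugepages (2 GiB worth of 2048 kB pages) and therefore ends up with 1024 pages spread evenly, all of which land on node0 because there is a single NUMA node (_no_nodes=1 below). A back-of-the-envelope check using only values that appear in this log:
# Hugepagesize is 2048 kB in this run
echo $(( 1048576 / 2048 ))   # 1 GiB  -> 512 pages  (per_node_1G_alloc; Hugetlb: 1048576 kB)
echo $(( 2097152 / 2048 ))   # 2 GiB  -> 1024 pages (even_2G_alloc; NRHUGE=1024)
echo $(( 512 * 2048 ))       # 512 pages * 2048 kB = 1048576 kB, the Hugetlb figure above
Consistent with that, once setup.sh below raises the pool to 1024 pages, MemAvailable in the next dump drops by roughly 1 GiB (10541744 kB to 9493024 kB).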
13:25:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:01.568 13:25:18 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:01.568 13:25:18 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:01.568 13:25:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:15:01.568 13:25:18 -- setup/hugepages.sh@83 -- # : 0 00:15:01.568 13:25:18 -- setup/hugepages.sh@84 -- # : 0 00:15:01.568 13:25:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:01.568 13:25:18 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:15:01.568 13:25:18 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:15:01.568 13:25:18 -- setup/hugepages.sh@153 -- # setup output 00:15:01.568 13:25:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:01.568 13:25:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:01.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:01.828 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.828 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:01.828 13:25:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:15:01.828 13:25:19 -- setup/hugepages.sh@89 -- # local node 00:15:01.828 13:25:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:01.828 13:25:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:01.828 13:25:19 -- setup/hugepages.sh@92 -- # local surp 00:15:01.828 13:25:19 -- setup/hugepages.sh@93 -- # local resv 00:15:01.828 13:25:19 -- setup/hugepages.sh@94 -- # local anon 00:15:01.828 13:25:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:01.828 13:25:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:01.828 13:25:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:01.828 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:01.828 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:01.828 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.829 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.829 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.829 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.829 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.829 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7547776 kB' 'MemAvailable: 9493024 kB' 'Buffers: 3456 kB' 'Cached: 2154212 kB' 'SwapCached: 0 kB' 'Active: 894128 kB' 'Inactive: 1387856 kB' 'Active(anon): 134780 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1048 kB' 'Writeback: 0 kB' 'AnonPages: 126156 kB' 'Mapped: 49120 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145308 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75340 kB' 'KernelStack: 6388 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 
13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.829 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.829 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 
00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:01.830 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:01.830 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:01.830 13:25:19 -- setup/hugepages.sh@97 -- # anon=0 00:15:01.830 13:25:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:01.830 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:01.830 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:01.830 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:01.830 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.830 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.830 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.830 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.830 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.830 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548028 kB' 'MemAvailable: 9493276 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893916 kB' 'Inactive: 1387856 kB' 'Active(anon): 134568 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1048 kB' 'Writeback: 0 kB' 'AnonPages: 125740 kB' 'Mapped: 49084 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145304 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75336 kB' 'KernelStack: 6384 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 
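The guard [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] a little further up is verify_nr_hugepages inspecting the transparent-hugepage policy string: because the kernel here is not pinned to "never", the test also samples AnonHugePages (0 kB, hence anon=0) before re-reading HugePages_Surp in the lookup that continues below. A sketch of that guard, assuming the usual sysfs path for the THP setting (illustrative, not the verbatim SPDK code):
# Policy string looks like: always [madvise] never
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
anon=0
if [[ $thp != *"[never]"* ]]; then
  # THP can hand out anonymous huge pages, so account for them as well
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=$anon"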
00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # 
read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.830 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.830 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 
00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:01.831 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:01.831 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:01.831 13:25:19 -- setup/hugepages.sh@99 -- # surp=0 00:15:01.831 13:25:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:01.831 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:01.831 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:01.831 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:01.831 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.831 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:01.831 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.831 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.831 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.831 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548028 kB' 'MemAvailable: 9493276 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893832 kB' 'Inactive: 1387856 kB' 'Active(anon): 134484 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1048 kB' 'Writeback: 0 kB' 'AnonPages: 125916 kB' 'Mapped: 49092 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145304 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75336 kB' 'KernelStack: 6400 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.831 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.831 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 
00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- 
setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 
00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # continue 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.832 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.832 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:01.832 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:01.833 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:01.833 13:25:19 -- setup/hugepages.sh@100 -- # resv=0 00:15:01.833 nr_hugepages=1024 00:15:01.833 13:25:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:01.833 resv_hugepages=0 00:15:01.833 13:25:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:01.833 surplus_hugepages=0 00:15:01.833 13:25:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:01.833 anon_hugepages=0 00:15:01.833 13:25:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:01.833 13:25:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:01.833 13:25:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:01.833 13:25:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:01.833 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:01.833 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:01.833 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:01.833 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:01.833 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
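The checks at setup/hugepages.sh@97-109 above collect AnonHugePages, HugePages_Surp and HugePages_Rsvd and then require that the kernel-reported HugePages_Total (1024 here) equals the requested nr_hugepages plus surplus and reserved pages. Below is a standalone sketch of that accounting check; `meminfo_val` is a hypothetical stand-in for the get_meminfo helper traced above:

```bash
# Sketch of the accounting check traced at setup/hugepages.sh@107-110.
meminfo_val() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

nr_hugepages=1024                        # what the even_2G_alloc test requested
surp=$(meminfo_val HugePages_Surp)       # 0 in the trace
resv=$(meminfo_val HugePages_Rsvd)       # 0 in the trace
total=$(meminfo_val HugePages_Total)     # 1024 in the trace

# Healthy pool: the kernel-reported total covers the request plus
# surplus and reserved pages.
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
```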
00:15:01.833 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:01.833 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:01.833 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:01.833 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:01.833 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:01.833 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:01.833 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548028 kB' 'MemAvailable: 9493276 kB' 'Buffers: 3456 kB' 'Cached: 2154216 kB' 'SwapCached: 0 kB' 'Active: 893752 kB' 'Inactive: 1387856 kB' 'Active(anon): 134404 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1048 kB' 'Writeback: 0 kB' 'AnonPages: 125844 kB' 'Mapped: 49092 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145304 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75336 kB' 'KernelStack: 6368 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 358360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- 
setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.093 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.093 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 
00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 
00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 
00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.094 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.094 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.094 13:25:19 -- setup/common.sh@33 -- # echo 1024 00:15:02.094 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.094 13:25:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:02.094 13:25:19 -- setup/hugepages.sh@112 -- # get_nodes 00:15:02.094 13:25:19 -- setup/hugepages.sh@27 -- # local node 00:15:02.094 13:25:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:02.094 13:25:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:02.094 13:25:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:02.094 13:25:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:02.094 13:25:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:02.094 13:25:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:02.094 13:25:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:02.094 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:02.094 13:25:19 -- setup/common.sh@18 -- # local node=0 00:15:02.094 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:02.094 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.094 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.095 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:02.095 13:25:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:02.095 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.095 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7548028 kB' 'MemUsed: 4693944 kB' 'SwapCached: 0 kB' 'Active: 893740 kB' 'Inactive: 1387856 kB' 'Active(anon): 134392 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1048 kB' 'Writeback: 0 kB' 'FilePages: 2157672 kB' 'Mapped: 49092 kB' 'AnonPages: 125556 kB' 'Shmem: 10464 kB' 'KernelStack: 6420 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 145304 
kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 
00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- 
setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.095 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.095 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.095 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:02.095 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.095 13:25:19 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:15:02.096 13:25:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:15:02.096 13:25:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:15:02.096 node0=1024 expecting 1024 13:25:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:15:02.096 13:25:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:15:02.096
00:15:02.096 real 0m0.512s
00:15:02.096 user 0m0.252s
00:15:02.096 sys 0m0.293s
00:15:02.096 13:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:02.096 13:25:19 -- common/autotest_common.sh@10 -- # set +x
00:15:02.096 ************************************
00:15:02.096 END TEST even_2G_alloc
00:15:02.096 ************************************
00:15:02.096 13:25:19 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:15:02.096 13:25:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:15:02.096 13:25:19 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:02.096 13:25:19 -- common/autotest_common.sh@10 -- # set +x
00:15:02.096 ************************************
00:15:02.096 START TEST odd_alloc
00:15:02.096 ************************************
00:15:02.096 13:25:19 -- common/autotest_common.sh@1111 -- # odd_alloc
00:15:02.096 13:25:19 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:15:02.096 13:25:19 -- setup/hugepages.sh@49 -- # local size=2098176
00:15:02.096 13:25:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:15:02.096 13:25:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:15:02.096 13:25:19 -- setup/hugepages.sh@62 -- # user_nodes=()
00:15:02.096 13:25:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:15:02.096 13:25:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:15:02.096 13:25:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:15:02.096 13:25:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:15:02.096 13:25:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:15:02.096 13:25:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:15:02.096 13:25:19 -- setup/hugepages.sh@83 -- # : 0
00:15:02.096 13:25:19 -- setup/hugepages.sh@84 -- # : 0
00:15:02.096 13:25:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:15:02.096 13:25:19 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:15:02.096 13:25:19 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:15:02.096 13:25:19 -- setup/hugepages.sh@160 -- # setup output
00:15:02.096 13:25:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:15:02.096 13:25:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:02.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:15:02.355 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:02.355 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:02.616 13:25:19 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:15:02.616 13:25:19 -- setup/hugepages.sh@89 -- # local node
00:15:02.616 13:25:19 -- setup/hugepages.sh@90 -- # local sorted_t
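Note on the trace above and below: the odd_alloc test has just requested an odd hugepage count (nr_hugepages=1025, HUGEMEM=2049), and verify_nr_hugepages now reads the kernel's view back from /proc/meminfo. Each long run of "IFS=': ' / read -r var val _ / continue" entries is one call to the get_meminfo helper in setup/common.sh scanning /proc/meminfo for a single field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total). A minimal sketch of that pattern, reconstructed only from the commands visible in this trace and not taken verbatim from the SPDK helper (the real one also handles per-node meminfo files, omitted here):

    # Hedged sketch of the loop being traced: scan /proc/meminfo for one key.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys (Dirty, Writeback, Mapped, ...)
            echo "$val"                        # emit the value for the requested key and stop
            return 0
        done < /proc/meminfo
    }

    # As used in the trace: anon=$(get_meminfo AnonHugePages), surp=$(get_meminfo HugePages_Surp),
    # resv=$(get_meminfo HugePages_Rsvd); the test then checks HugePages_Total (1025) against
    # nr_hugepages + surp + resv.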
00:15:02.616 13:25:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:02.616 13:25:19 -- setup/hugepages.sh@92 -- # local surp 00:15:02.616 13:25:19 -- setup/hugepages.sh@93 -- # local resv 00:15:02.616 13:25:19 -- setup/hugepages.sh@94 -- # local anon 00:15:02.616 13:25:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:02.616 13:25:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:02.616 13:25:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:02.616 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:02.616 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:02.616 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.616 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.616 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:02.616 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:02.616 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.616 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7541684 kB' 'MemAvailable: 9486940 kB' 'Buffers: 3456 kB' 'Cached: 2154220 kB' 'SwapCached: 0 kB' 'Active: 894360 kB' 'Inactive: 1387864 kB' 'Active(anon): 135012 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1212 kB' 'Writeback: 0 kB' 'AnonPages: 126416 kB' 'Mapped: 49128 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145316 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75348 kB' 'KernelStack: 6424 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 
00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.616 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.616 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # 
continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:02.617 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:02.617 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.617 13:25:19 -- setup/hugepages.sh@97 -- # anon=0 00:15:02.617 13:25:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:02.617 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:02.617 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:02.617 13:25:19 -- setup/common.sh@19 -- # local 
var val 00:15:02.617 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.617 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.617 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:02.617 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:02.617 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.617 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7541436 kB' 'MemAvailable: 9486692 kB' 'Buffers: 3456 kB' 'Cached: 2154220 kB' 'SwapCached: 0 kB' 'Active: 894060 kB' 'Inactive: 1387864 kB' 'Active(anon): 134712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1216 kB' 'Writeback: 0 kB' 'AnonPages: 125892 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6416 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.617 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.617 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r 
var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 
00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.618 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.618 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.619 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:02.619 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.619 13:25:19 -- setup/hugepages.sh@99 -- # surp=0 00:15:02.619 13:25:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:02.619 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:02.619 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:02.619 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:02.619 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.619 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.619 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:02.619 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:02.619 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.619 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7541436 kB' 'MemAvailable: 9486692 kB' 'Buffers: 3456 kB' 'Cached: 2154220 kB' 'SwapCached: 0 kB' 'Active: 893748 kB' 'Inactive: 1387864 kB' 'Active(anon): 134400 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 1216 kB' 'Writeback: 0 kB' 'AnonPages: 125876 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6416 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 
13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.619 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.619 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:02.620 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:02.620 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.620 13:25:19 -- setup/hugepages.sh@100 -- # resv=0 00:15:02.620 nr_hugepages=1025 00:15:02.620 13:25:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:15:02.620 resv_hugepages=0 00:15:02.620 13:25:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:02.620 surplus_hugepages=0 00:15:02.620 13:25:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:02.620 anon_hugepages=0 00:15:02.620 13:25:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:02.620 13:25:19 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:02.620 13:25:19 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:15:02.620 13:25:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:02.620 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:02.620 13:25:19 -- setup/common.sh@18 -- # local node= 00:15:02.620 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:02.620 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.620 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.620 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:02.620 13:25:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:02.620 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.620 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7541436 kB' 'MemAvailable: 9486692 kB' 'Buffers: 3456 kB' 'Cached: 2154220 kB' 'SwapCached: 0 kB' 'Active: 893980 kB' 'Inactive: 1387864 kB' 'Active(anon): 134632 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1216 kB' 'Writeback: 0 kB' 'AnonPages: 125812 kB' 'Mapped: 48908 kB' 'Shmem: 10464 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'KernelStack: 6400 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.620 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.620 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 
13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.621 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.621 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:02.622 13:25:19 -- setup/common.sh@33 -- # echo 1025 00:15:02.622 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.622 13:25:19 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:15:02.622 13:25:19 -- 
setup/hugepages.sh@112 -- # get_nodes 00:15:02.622 13:25:19 -- setup/hugepages.sh@27 -- # local node 00:15:02.622 13:25:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:02.622 13:25:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:15:02.622 13:25:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:02.622 13:25:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:02.622 13:25:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:02.622 13:25:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:02.622 13:25:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:02.622 13:25:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:02.622 13:25:19 -- setup/common.sh@18 -- # local node=0 00:15:02.622 13:25:19 -- setup/common.sh@19 -- # local var val 00:15:02.622 13:25:19 -- setup/common.sh@20 -- # local mem_f mem 00:15:02.622 13:25:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:02.622 13:25:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:02.622 13:25:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:02.622 13:25:19 -- setup/common.sh@28 -- # mapfile -t mem 00:15:02.622 13:25:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542204 kB' 'MemUsed: 4699768 kB' 'SwapCached: 0 kB' 'Active: 893944 kB' 'Inactive: 1387864 kB' 'Active(anon): 134596 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387864 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1216 kB' 'Writeback: 0 kB' 'FilePages: 2157676 kB' 'Mapped: 48908 kB' 'AnonPages: 125800 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69968 kB' 'Slab: 145312 kB' 'SReclaimable: 69968 kB' 'SUnreclaim: 75344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
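For readers following the get_nodes/per-node accounting being traced here, the same per-node numbers can also be pulled straight from sysfs with a short standalone loop. This is a sketch under the assumption of the standard Linux sysfs hugepage layout; it is not the SPDK helper itself, which instead scans the per-node meminfo files as the trace shows.

#!/usr/bin/env bash
# Sketch: enumerate NUMA nodes the way the traced get_nodes loop does and
# report the 2 MiB hugepages each node holds, so a total such as the 1025
# above can be compared against nr_hugepages + surplus + reserved.
shopt -s extglob nullglob

declare -A node_pages
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    node_pages[$id]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

total=0
for id in "${!node_pages[@]}"; do
    echo "node${id}=${node_pages[$id]}"
    (( total += node_pages[id] ))
done
echo "total=${total}"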
00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 
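The long run of field checks above is setup/common.sh's get_meminfo walking a meminfo dump entry by entry until it reaches the requested key (here HugePages_Surp for node 0). A condensed standalone version of that lookup, assuming the usual "Key: value kB" format, looks roughly like this; the function name and error handling are illustrative, not the exact SPDK script.

# Minimal sketch of the lookup being traced: pick the per-node meminfo file
# when a node is given, strip the leading "Node N " prefix those files add,
# split each line on ': ' and return the value for the requested key.
shopt -s extglob

get_meminfo_value() {
    local key=$1 node=${2-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi

    local line var val _
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }   # per-node files prefix every line
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

get_meminfo_value HugePages_Surp 0   # prints 0 on the node dumped above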
00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.622 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.622 13:25:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 
-- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # continue 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # IFS=': ' 00:15:02.623 13:25:19 -- setup/common.sh@31 -- # read -r var val _ 00:15:02.623 13:25:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:02.623 13:25:19 -- setup/common.sh@33 -- # echo 0 00:15:02.623 13:25:19 -- setup/common.sh@33 -- # return 0 00:15:02.623 13:25:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:02.623 13:25:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:02.623 13:25:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:02.623 13:25:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:02.623 node0=1025 expecting 1025 00:15:02.623 13:25:19 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:15:02.623 13:25:19 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:15:02.623 00:15:02.623 real 0m0.544s 00:15:02.623 user 0m0.278s 00:15:02.623 sys 0m0.298s 00:15:02.623 13:25:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.623 13:25:19 -- common/autotest_common.sh@10 -- # set +x 00:15:02.623 ************************************ 00:15:02.623 END TEST odd_alloc 00:15:02.623 ************************************ 00:15:02.623 13:25:20 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:15:02.623 13:25:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:02.623 13:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.623 13:25:20 -- common/autotest_common.sh@10 -- # set +x 00:15:02.881 ************************************ 00:15:02.881 START TEST custom_alloc 00:15:02.881 ************************************ 00:15:02.881 13:25:20 -- common/autotest_common.sh@1111 -- # custom_alloc 00:15:02.881 13:25:20 -- setup/hugepages.sh@167 -- # local IFS=, 00:15:02.881 13:25:20 -- setup/hugepages.sh@169 -- 
# local node 00:15:02.881 13:25:20 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:15:02.881 13:25:20 -- setup/hugepages.sh@170 -- # local nodes_hp 00:15:02.881 13:25:20 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:15:02.881 13:25:20 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:15:02.881 13:25:20 -- setup/hugepages.sh@49 -- # local size=1048576 00:15:02.881 13:25:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:15:02.881 13:25:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:15:02.881 13:25:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:02.881 13:25:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:02.881 13:25:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:02.881 13:25:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:02.881 13:25:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:02.881 13:25:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:02.881 13:25:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:15:02.881 13:25:20 -- setup/hugepages.sh@83 -- # : 0 00:15:02.881 13:25:20 -- setup/hugepages.sh@84 -- # : 0 00:15:02.881 13:25:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:15:02.881 13:25:20 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:15:02.881 13:25:20 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:15:02.882 13:25:20 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:15:02.882 13:25:20 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:15:02.882 13:25:20 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:15:02.882 13:25:20 -- setup/hugepages.sh@62 -- # user_nodes=() 00:15:02.882 13:25:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:02.882 13:25:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:15:02.882 13:25:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:02.882 13:25:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:02.882 13:25:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:02.882 13:25:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:15:02.882 13:25:20 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:15:02.882 13:25:20 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:15:02.882 13:25:20 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:15:02.882 13:25:20 -- setup/hugepages.sh@78 -- # return 0 00:15:02.882 13:25:20 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:15:02.882 13:25:20 -- setup/hugepages.sh@187 -- # setup output 00:15:02.882 13:25:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:02.882 13:25:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:03.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.143 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:03.143 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:03.143 13:25:20 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:15:03.143 13:25:20 -- setup/hugepages.sh@188 
-- # verify_nr_hugepages 00:15:03.143 13:25:20 -- setup/hugepages.sh@89 -- # local node 00:15:03.143 13:25:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:03.143 13:25:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:03.143 13:25:20 -- setup/hugepages.sh@92 -- # local surp 00:15:03.143 13:25:20 -- setup/hugepages.sh@93 -- # local resv 00:15:03.143 13:25:20 -- setup/hugepages.sh@94 -- # local anon 00:15:03.143 13:25:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:03.143 13:25:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:03.143 13:25:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:03.143 13:25:20 -- setup/common.sh@18 -- # local node= 00:15:03.143 13:25:20 -- setup/common.sh@19 -- # local var val 00:15:03.143 13:25:20 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.143 13:25:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.143 13:25:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.143 13:25:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.143 13:25:20 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.143 13:25:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8594172 kB' 'MemAvailable: 10539472 kB' 'Buffers: 3456 kB' 'Cached: 2154256 kB' 'SwapCached: 0 kB' 'Active: 894340 kB' 'Inactive: 1387900 kB' 'Active(anon): 134992 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'AnonPages: 126408 kB' 'Mapped: 49084 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'KernelStack: 6404 kB' 'PageTables: 4592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ 
Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.143 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.143 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # 
read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 
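The custom_alloc test set up above asks get_test_nr_hugepages for 1048576 kB, and the trace shows it landing on nr_hugepages=512 spread over the single node (HUGENODE='nodes_hp[0]=512'). The arithmetic behind that is just the requested size divided by the default hugepage size reported in the dumps (Hugepagesize: 2048 kB); a small sketch with illustrative variable names:

# Sketch of the sizing step: requested kB / default hugepage size = page count,
# then the pages are assigned to the available nodes (one node here, so all
# 512 land on node 0).
size_kb=1048576
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 1048576 / 2048 = 512

no_nodes=1
declare -a nodes_hp
nodes_hp[0]=$(( nr_hugepages / no_nodes ))

# The test then hands this to scripts/setup.sh through the HUGENODE string:
echo "HUGENODE=nodes_hp[0]=${nodes_hp[0]}"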
00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.144 13:25:20 -- setup/common.sh@33 -- # echo 0 00:15:03.144 13:25:20 -- setup/common.sh@33 -- # return 0 00:15:03.144 13:25:20 -- setup/hugepages.sh@97 -- # anon=0 00:15:03.144 13:25:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:03.144 13:25:20 -- setup/common.sh@17 -- # 
local get=HugePages_Surp 00:15:03.144 13:25:20 -- setup/common.sh@18 -- # local node= 00:15:03.144 13:25:20 -- setup/common.sh@19 -- # local var val 00:15:03.144 13:25:20 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.144 13:25:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.144 13:25:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.144 13:25:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.144 13:25:20 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.144 13:25:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8594172 kB' 'MemAvailable: 10539472 kB' 'Buffers: 3456 kB' 'Cached: 2154256 kB' 'SwapCached: 0 kB' 'Active: 894472 kB' 'Inactive: 1387900 kB' 'Active(anon): 135124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1344 kB' 'Writeback: 0 kB' 'AnonPages: 126280 kB' 'Mapped: 49084 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'KernelStack: 6388 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.144 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.144 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- 
setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 
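What verify_nr_hugepages is doing across these dumps is collecting AnonHugePages (anon), HugePages_Surp (surp) and HugePages_Rsvd (resv), then checking HugePages_Total against the 512 pages the test requested, the same "total == expected + surp + resv" style check the odd_alloc run above performed with 1025. A compressed sketch of that check, using awk reads in place of the field-by-field scan:

# Condensed verification: read the relevant counters once each and compare,
# instead of scanning every meminfo field as the traced loop does.
meminfo_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

expected=512
anon=$(meminfo_field AnonHugePages)     # 0 kB in the dumps above
surp=$(meminfo_field HugePages_Surp)    # 0
resv=$(meminfo_field HugePages_Rsvd)    # 0
total=$(meminfo_field HugePages_Total)  # 512

if (( total == expected + surp + resv )); then
    echo "nr_hugepages OK: ${total}"
else
    echo "unexpected hugepage count: ${total} != ${expected} + ${surp} + ${resv}" >&2
fi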
00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.145 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.145 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.146 13:25:20 -- setup/common.sh@33 -- # echo 0 00:15:03.146 13:25:20 -- setup/common.sh@33 -- # return 0 00:15:03.146 13:25:20 -- setup/hugepages.sh@99 -- # surp=0 00:15:03.146 13:25:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:03.146 13:25:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:03.146 13:25:20 -- setup/common.sh@18 -- # local node= 00:15:03.146 13:25:20 -- setup/common.sh@19 -- # local var val 00:15:03.146 13:25:20 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.146 13:25:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.146 13:25:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.146 13:25:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.146 13:25:20 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.146 13:25:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8594172 kB' 'MemAvailable: 10539472 kB' 'Buffers: 3456 kB' 'Cached: 2154256 kB' 'SwapCached: 0 kB' 'Active: 893976 kB' 'Inactive: 1387900 kB' 
'Active(anon): 134628 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 126012 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145296 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6384 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # 
continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.146 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.146 13:25:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.147 13:25:20 -- setup/common.sh@33 -- # echo 0 00:15:03.147 13:25:20 -- setup/common.sh@33 -- # return 0 00:15:03.147 13:25:20 -- setup/hugepages.sh@100 -- # resv=0 00:15:03.147 nr_hugepages=512 00:15:03.147 13:25:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:15:03.147 13:25:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:03.147 resv_hugepages=0 00:15:03.147 surplus_hugepages=0 00:15:03.147 13:25:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:03.147 anon_hugepages=0 00:15:03.147 13:25:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:03.147 13:25:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:03.147 13:25:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:15:03.147 13:25:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:03.147 13:25:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:03.147 13:25:20 -- setup/common.sh@18 -- # local node= 00:15:03.147 13:25:20 -- setup/common.sh@19 -- # local var val 00:15:03.147 13:25:20 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.147 13:25:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.147 13:25:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.147 13:25:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.147 13:25:20 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.147 13:25:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8593920 kB' 'MemAvailable: 10539220 kB' 'Buffers: 3456 kB' 'Cached: 2154256 kB' 'SwapCached: 0 kB' 'Active: 894204 kB' 'Inactive: 1387900 kB' 'Active(anon): 134856 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'AnonPages: 126024 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145296 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6400 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 358492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.147 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.147 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- 
# IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.148 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.148 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.149 13:25:20 -- setup/common.sh@33 -- # echo 512 00:15:03.149 13:25:20 -- setup/common.sh@33 
-- # return 0 00:15:03.149 13:25:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:15:03.149 13:25:20 -- setup/hugepages.sh@112 -- # get_nodes 00:15:03.149 13:25:20 -- setup/hugepages.sh@27 -- # local node 00:15:03.149 13:25:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:03.149 13:25:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:15:03.149 13:25:20 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:03.149 13:25:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:03.149 13:25:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:03.149 13:25:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:03.149 13:25:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:03.149 13:25:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:03.149 13:25:20 -- setup/common.sh@18 -- # local node=0 00:15:03.149 13:25:20 -- setup/common.sh@19 -- # local var val 00:15:03.149 13:25:20 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.149 13:25:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.149 13:25:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:03.149 13:25:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:03.149 13:25:20 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.149 13:25:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8593920 kB' 'MemUsed: 3648052 kB' 'SwapCached: 0 kB' 'Active: 893828 kB' 'Inactive: 1387900 kB' 'Active(anon): 134480 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1348 kB' 'Writeback: 0 kB' 'FilePages: 2157712 kB' 'Mapped: 48916 kB' 'AnonPages: 125840 kB' 'Shmem: 10464 kB' 'KernelStack: 6416 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69984 kB' 'Slab: 145296 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.149 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.149 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # 
continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.409 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.409 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.410 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.410 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.410 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.410 13:25:20 -- setup/common.sh@32 -- # continue 00:15:03.410 13:25:20 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.410 13:25:20 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.410 13:25:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.410 13:25:20 -- setup/common.sh@33 -- # echo 0 00:15:03.410 13:25:20 -- setup/common.sh@33 -- # return 0 00:15:03.410 13:25:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:03.410 13:25:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:03.410 13:25:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:03.410 13:25:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:03.410 node0=512 expecting 512 00:15:03.410 13:25:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:15:03.410 13:25:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:15:03.410 00:15:03.410 real 0m0.506s 00:15:03.410 user 0m0.253s 00:15:03.410 sys 0m0.286s 00:15:03.410 13:25:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:03.410 13:25:20 -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 ************************************ 00:15:03.410 END TEST custom_alloc 00:15:03.410 ************************************ 00:15:03.410 13:25:20 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:15:03.410 13:25:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:03.410 13:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:03.410 13:25:20 -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 ************************************ 00:15:03.410 START TEST no_shrink_alloc 00:15:03.410 ************************************ 00:15:03.410 13:25:20 -- 
common/autotest_common.sh@1111 -- # no_shrink_alloc 00:15:03.410 13:25:20 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:15:03.410 13:25:20 -- setup/hugepages.sh@49 -- # local size=2097152 00:15:03.410 13:25:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:15:03.410 13:25:20 -- setup/hugepages.sh@51 -- # shift 00:15:03.410 13:25:20 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:15:03.410 13:25:20 -- setup/hugepages.sh@52 -- # local node_ids 00:15:03.410 13:25:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:15:03.410 13:25:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:15:03.410 13:25:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:15:03.410 13:25:20 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:15:03.410 13:25:20 -- setup/hugepages.sh@62 -- # local user_nodes 00:15:03.410 13:25:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:15:03.410 13:25:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:15:03.410 13:25:20 -- setup/hugepages.sh@67 -- # nodes_test=() 00:15:03.410 13:25:20 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:15:03.410 13:25:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:15:03.410 13:25:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:15:03.410 13:25:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:15:03.410 13:25:20 -- setup/hugepages.sh@73 -- # return 0 00:15:03.410 13:25:20 -- setup/hugepages.sh@198 -- # setup output 00:15:03.410 13:25:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:03.410 13:25:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:03.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:03.669 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:03.669 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:03.669 13:25:21 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:15:03.669 13:25:21 -- setup/hugepages.sh@89 -- # local node 00:15:03.669 13:25:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:03.669 13:25:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:03.669 13:25:21 -- setup/hugepages.sh@92 -- # local surp 00:15:03.669 13:25:21 -- setup/hugepages.sh@93 -- # local resv 00:15:03.669 13:25:21 -- setup/hugepages.sh@94 -- # local anon 00:15:03.669 13:25:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:03.669 13:25:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:03.669 13:25:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:03.669 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:03.669 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:03.669 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.669 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.669 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.669 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.669 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.669 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542352 kB' 'MemAvailable: 9487660 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 893416 kB' 
'Inactive: 1387908 kB' 'Active(anon): 134068 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1504 kB' 'Writeback: 0 kB' 'AnonPages: 125172 kB' 'Mapped: 49264 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145296 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75312 kB' 'KernelStack: 6356 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 
-- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 
13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.669 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.669 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.670 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.670 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:03.933 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:03.933 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:03.933 13:25:21 -- setup/hugepages.sh@97 -- # anon=0 00:15:03.933 13:25:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:03.933 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:03.933 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:03.933 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:03.933 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.933 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.933 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.933 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.933 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.933 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542352 kB' 'MemAvailable: 9487660 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 892956 kB' 'Inactive: 1387908 kB' 'Active(anon): 133608 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 124980 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'KernelStack: 6400 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.933 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.933 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 
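The trace entries around this point show how setup/common.sh's get_meminfo walks the memory counters: it snapshots the source file with mapfile, strips any leading "Node <n>" token, then splits each line on `IFS=': '` with `read -r var val _`, skipping every key until it reaches the one requested (HugePages_Surp in this pass) and echoing just its numeric value. A minimal bash sketch of that parsing pattern, reconstructed from the traced commands; it is not the verbatim SPDK script, and anything not visible in the trace is illustrative:

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the meminfo lookup seen in this trace; not the verbatim SPDK setup/common.sh.
    get_meminfo() {
        local get=$1 node=${2:-}                 # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")         # per-node files prefix every line with "Node N "
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # skip every key except the requested one
            echo "${val:-0}"                     # value only, e.g. 1024 or 7542352 (kB unit dropped)
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0                                   # key not present: report 0
    }

Called as, for example, anon=$(get_meminfo AnonHugePages) or surp=$(get_meminfo HugePages_Surp 0), which matches the calls visible in this trace.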
00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.934 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.934 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 
-- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.935 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:03.935 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:03.935 13:25:21 -- setup/hugepages.sh@99 -- # surp=0 00:15:03.935 13:25:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:03.935 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:03.935 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:03.935 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:03.935 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.935 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.935 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.935 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.935 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.935 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542352 kB' 'MemAvailable: 9487660 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 892968 kB' 'Inactive: 1387908 kB' 'Active(anon): 133620 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 125016 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'KernelStack: 6400 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 
-- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.935 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.935 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 
13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.936 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:03.936 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:03.936 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:03.936 nr_hugepages=1024 00:15:03.936 resv_hugepages=0 00:15:03.936 13:25:21 -- setup/hugepages.sh@100 -- # resv=0 00:15:03.936 13:25:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:03.936 13:25:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:03.936 surplus_hugepages=0 00:15:03.936 13:25:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:03.936 anon_hugepages=0 00:15:03.936 13:25:21 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:03.936 13:25:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:03.936 13:25:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:03.936 13:25:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:03.936 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:03.936 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:03.936 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:03.936 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.936 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.936 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:03.936 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:03.936 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.936 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.936 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7542352 kB' 'MemAvailable: 9487660 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 892992 kB' 'Inactive: 1387908 kB' 'Active(anon): 133644 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'AnonPages: 125056 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'KernelStack: 6400 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 359552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': 
' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
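Between the meminfo walks, setup/hugepages.sh folds the readings into its consistency check: the log above prints nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then evaluates the guards (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before re-reading HugePages_Total. A hedged sketch of that accounting step, assuming a get_meminfo helper like the one sketched earlier; function and variable names not visible in the trace are illustrative:

    # Sketch of the accounting step implied by the trace; not the verbatim SPDK hugepages.sh.
    verify_hugepage_pool() {
        local nr_hugepages=1024                  # requested pool size in this run
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)        # 0 in this log
        surp=$(get_meminfo HugePages_Surp)       # 0
        resv=$(get_meminfo HugePages_Rsvd)       # 0
        total=$(get_meminfo HugePages_Total)     # 1024
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        # the pool is consistent only if the kernel's total equals requested + surplus + reserved
        (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))
    }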
00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.937 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.937 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:03.938 13:25:21 -- setup/common.sh@33 -- # echo 1024 00:15:03.938 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:03.938 13:25:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:03.938 13:25:21 -- setup/hugepages.sh@112 -- # get_nodes 00:15:03.938 13:25:21 -- setup/hugepages.sh@27 -- # local node 00:15:03.938 13:25:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:03.938 13:25:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:03.938 13:25:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:03.938 13:25:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:03.938 13:25:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:03.938 13:25:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:03.938 13:25:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:03.938 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:03.938 13:25:21 -- setup/common.sh@18 -- # local node=0 00:15:03.938 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:03.938 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:03.938 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:03.938 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:03.938 13:25:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:03.938 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:03.938 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 7542676 kB' 'MemUsed: 4699296 kB' 'SwapCached: 0 kB' 'Active: 892824 kB' 'Inactive: 1387908 kB' 'Active(anon): 133476 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1508 kB' 'Writeback: 0 kB' 'FilePages: 2157720 kB' 'Mapped: 48932 kB' 'AnonPages: 124916 kB' 'Shmem: 10464 kB' 'KernelStack: 6384 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69984 kB' 'Slab: 145292 kB' 'SReclaimable: 69984 kB' 'SUnreclaim: 75308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.938 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.938 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 
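This last get_meminfo call in the pass runs with node=0, so the source file switches to /sys/devices/system/node/node0/meminfo and the leading "Node 0" token is stripped before parsing; a few entries later the log confirms the per-node tally with 'node0=1024 expecting 1024'. A short sketch of that per-node pass under the same assumptions as the earlier sketches; the loop bounds and array names are illustrative:

    # Sketch of the per-node pass implied by the trace; not the verbatim SPDK hugepages.sh.
    declare -a nodes_test
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        [[ -d $node_dir ]] || continue
        node=${node_dir##*node}                  # ".../node0" -> "0"
        nodes_test[node]=1024                    # expected pages per node in this run
    done
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")   # parses node${node}/meminfo with the "Node N " prefix removed
        (( nodes_test[node] += surp ))
        echo "node${node}=$(get_meminfo HugePages_Total "$node") expecting ${nodes_test[node]}"
    done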
00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 
13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # continue 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:03.939 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:03.939 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:03.939 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:03.939 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:03.939 node0=1024 expecting 1024 00:15:03.939 13:25:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:03.939 13:25:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:03.939 13:25:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:03.939 13:25:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:03.939 13:25:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:03.939 13:25:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:03.939 13:25:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:15:03.939 13:25:21 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:15:03.939 13:25:21 -- setup/hugepages.sh@202 -- # setup output 00:15:03.939 13:25:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:03.939 13:25:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:04.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:04.199 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:04.199 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:04.199 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:15:04.199 13:25:21 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:15:04.199 13:25:21 -- setup/hugepages.sh@89 -- # local node 00:15:04.199 13:25:21 -- setup/hugepages.sh@90 -- # local sorted_t 00:15:04.199 13:25:21 -- setup/hugepages.sh@91 -- # local sorted_s 00:15:04.199 13:25:21 -- setup/hugepages.sh@92 -- # local surp 00:15:04.199 13:25:21 -- setup/hugepages.sh@93 -- # local resv 00:15:04.199 13:25:21 -- setup/hugepages.sh@94 -- # local anon 00:15:04.199 13:25:21 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:15:04.199 13:25:21 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:15:04.199 13:25:21 -- setup/common.sh@17 -- # local get=AnonHugePages 00:15:04.199 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:04.199 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:04.199 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:04.199 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:04.199 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:04.199 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:04.199 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:04.199 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7546348 kB' 'MemAvailable: 9491656 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 888956 kB' 'Inactive: 1387908 kB' 'Active(anon): 129608 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1516 kB' 'Writeback: 0 kB' 'AnonPages: 121000 kB' 'Mapped: 48324 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 145136 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 75156 kB' 'KernelStack: 6292 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 340828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.199 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.199 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 
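When get_meminfo is called with no node argument, the [[ -e /sys/devices/system/node/node/meminfo ]] test seen in the trace fails (the node suffix is empty) and the helper falls back to /proc/meminfo; when a node is given, as in the per-node pass at the end of this section, it reads that node's own meminfo file and strips the leading "Node <N> " those files put in front of every counter. A hedged sketch of that source selection (meminfo_lines is an illustrative name, not part of setup/common.sh):

# Sketch only: pick the per-node meminfo file when a node is given,
# otherwise /proc/meminfo, and drop the "Node <N> " prefix so both
# sources parse the same way; extglob is needed for +([0-9]).
shopt -s extglob
meminfo_lines() {
    local node=$1 mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total: 1024" -> "HugePages_Total: 1024"
    printf '%s\n' "${mem[@]}"
}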
00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # 
IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:15:04.461 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:04.461 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:04.461 13:25:21 -- setup/hugepages.sh@97 -- # anon=0 00:15:04.461 13:25:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:15:04.461 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:04.461 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:04.461 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:04.461 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:04.461 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:04.461 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:04.461 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:04.461 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:04.461 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7546348 kB' 'MemAvailable: 9491656 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 888700 kB' 'Inactive: 1387908 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1516 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 48260 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 145132 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 75152 kB' 'KernelStack: 6260 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 340828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.461 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.461 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 
13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 
-- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.462 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.462 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.463 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:04.463 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:04.463 13:25:21 -- setup/hugepages.sh@99 -- # surp=0 00:15:04.463 13:25:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:15:04.463 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:15:04.463 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:04.463 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:04.463 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:04.463 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:04.463 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:04.463 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:04.463 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:04.463 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7546348 kB' 'MemAvailable: 9491656 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 888424 kB' 'Inactive: 1387908 kB' 'Active(anon): 129076 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1516 kB' 'Writeback: 0 kB' 'AnonPages: 120468 kB' 'Mapped: 48384 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 145132 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 75152 kB' 'KernelStack: 6244 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 340828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- 
setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 
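Once the lookups in this block finish, the numbers they return feed a plain accounting identity: the anonymous, surplus and reserved counters come back 0, the HugePages_Total read that follows comes back 1024, and that has to equal the 1024 pages the test asked for. Roughly, in the same shell arithmetic the traced hugepages.sh uses (a paraphrase of the traced check, not a verbatim copy of the script):

# Values as they appear in this run of the trace.
nr_hugepages=1024   # requested page count (NRHUGE / nr_hugepages)
anon=0              # AnonHugePages   (kB, first lookup)
surp=0              # HugePages_Surp  (second lookup)
resv=0              # HugePages_Rsvd  (third lookup)
total=1024          # HugePages_Total (last lookup)
(( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage count' >&2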
00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.463 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.463 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:15:04.464 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:04.464 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:04.464 nr_hugepages=1024 00:15:04.464 13:25:21 -- setup/hugepages.sh@100 -- # resv=0 00:15:04.464 13:25:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:15:04.464 resv_hugepages=0 00:15:04.464 13:25:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:15:04.464 surplus_hugepages=0 00:15:04.464 13:25:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:15:04.464 anon_hugepages=0 00:15:04.464 13:25:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:15:04.464 13:25:21 -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:15:04.464 13:25:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:15:04.464 13:25:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:15:04.464 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Total 00:15:04.464 13:25:21 -- setup/common.sh@18 -- # local node= 00:15:04.464 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:04.464 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:04.464 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:04.464 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:15:04.464 13:25:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:15:04.464 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:04.464 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7546100 kB' 'MemAvailable: 9491408 kB' 'Buffers: 3456 kB' 'Cached: 2154264 kB' 'SwapCached: 0 kB' 'Active: 888432 kB' 'Inactive: 1387908 kB' 'Active(anon): 129084 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1516 kB' 'Writeback: 0 kB' 'AnonPages: 120500 kB' 'Mapped: 48392 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 145096 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 75116 kB' 'KernelStack: 6272 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 340828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 169836 kB' 'DirectMap2M: 6121472 kB' 'DirectMap1G: 8388608 kB' 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 
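After HugePages_Total is read back (1024, matching the request), get_nodes enumerates /sys/devices/system/node/node+([0-9]) and the per-node meminfo files are read to see how those pages are spread across nodes; on this single-node VM that is just node0. A rough condensation of that per-node pass (an illustration, not the verbatim hugepages.sh loop):

# Sketch: discover the NUMA nodes the way the trace does, then read each
# node's own meminfo, whose counters carry a "Node <N> " prefix.
shopt -s extglob nullglob
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    echo "node${node}:"
    grep -E 'HugePages_(Total|Free|Surp)' "$node_dir/meminfo"   # e.g. "Node 0 HugePages_Total:  1024"
done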
00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.464 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.464 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- 
setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.465 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.465 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:15:04.466 13:25:21 -- setup/common.sh@33 -- # echo 1024 00:15:04.466 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:04.466 13:25:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:15:04.466 13:25:21 -- setup/hugepages.sh@112 -- # get_nodes 00:15:04.466 13:25:21 -- setup/hugepages.sh@27 -- # local node 00:15:04.466 13:25:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:15:04.466 13:25:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:15:04.466 13:25:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:15:04.466 13:25:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:15:04.466 13:25:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:15:04.466 13:25:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:15:04.466 13:25:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:15:04.466 13:25:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:15:04.466 13:25:21 -- setup/common.sh@18 -- # local node=0 00:15:04.466 13:25:21 -- setup/common.sh@19 -- # local var val 00:15:04.466 13:25:21 -- setup/common.sh@20 -- # local mem_f mem 00:15:04.466 13:25:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:15:04.466 13:25:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:15:04.466 13:25:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:15:04.466 13:25:21 -- setup/common.sh@28 -- # mapfile -t mem 00:15:04.466 13:25:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7546100 kB' 'MemUsed: 4695872 kB' 
'SwapCached: 0 kB' 'Active: 888396 kB' 'Inactive: 1387908 kB' 'Active(anon): 129048 kB' 'Inactive(anon): 0 kB' 'Active(file): 759348 kB' 'Inactive(file): 1387908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1516 kB' 'Writeback: 0 kB' 'FilePages: 2157720 kB' 'Mapped: 48192 kB' 'AnonPages: 120236 kB' 'Shmem: 10464 kB' 'KernelStack: 6272 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69980 kB' 'Slab: 145092 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 75112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 
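The loop traced here is the setup framework's get_meminfo helper walking /sys/devices/system/node/node0/meminfo one field at a time until it reaches the requested key. A minimal standalone sketch of the same pattern, assuming a per-node meminfo file; the helper name below is invented and the field/node arguments are only examples, not the script's own code:

    get_node_meminfo() {
        # e.g. get_node_meminfo HugePages_Surp 0   (illustrative arguments)
        local field=$1 node=$2 line key val
        while read -r line; do
            line=${line#"Node $node "}             # per-node lines carry a "Node <n> " prefix
            IFS=': ' read -r key val _ <<< "$line" # split "Key:   value kB" into key/val
            if [[ $key == "$field" ]]; then
                echo "$val"                        # caller captures it, e.g. surp=$(get_node_meminfo ...)
                return 0
            fi
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }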
00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- 
setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.466 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.466 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 
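Once the scan reaches the requested field, get_meminfo echoes the value and returns, and no_shrink_alloc folds it into its per-node expectation. Reduced to a hedged sketch of the arithmetic being checked (variable names are illustrative, not the script's own):

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    nr=$(cat /proc/sys/vm/nr_hugepages)
    if (( total == nr + surp + resv )); then
        echo "node0=$total expecting $total"   # same shape as the message below
    fi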
00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # continue 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # IFS=': ' 00:15:04.467 13:25:21 -- setup/common.sh@31 -- # read -r var val _ 00:15:04.467 13:25:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:15:04.467 13:25:21 -- setup/common.sh@33 -- # echo 0 00:15:04.467 13:25:21 -- setup/common.sh@33 -- # return 0 00:15:04.467 13:25:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:15:04.467 13:25:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:15:04.467 13:25:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:15:04.467 13:25:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:15:04.467 node0=1024 expecting 1024 00:15:04.467 13:25:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:15:04.467 ************************************ 00:15:04.467 END TEST no_shrink_alloc 00:15:04.467 ************************************ 00:15:04.467 13:25:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:15:04.467 00:15:04.467 real 0m1.094s 00:15:04.467 user 0m0.540s 00:15:04.467 sys 0m0.574s 00:15:04.467 13:25:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.467 13:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 13:25:21 -- setup/hugepages.sh@217 -- # clear_hp 00:15:04.467 13:25:21 -- setup/hugepages.sh@37 -- # local node hp 00:15:04.467 13:25:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:15:04.467 13:25:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:04.467 13:25:21 -- setup/hugepages.sh@41 -- # echo 0 00:15:04.467 13:25:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:15:04.467 13:25:21 -- setup/hugepages.sh@41 -- # echo 0 00:15:04.467 13:25:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:15:04.467 13:25:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:15:04.467 ************************************ 00:15:04.467 END TEST hugepages 00:15:04.467 ************************************ 00:15:04.467 00:15:04.467 real 0m4.969s 00:15:04.467 user 0m2.312s 00:15:04.467 sys 0m2.648s 00:15:04.467 13:25:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.467 13:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:04.759 13:25:21 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:04.759 13:25:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:04.759 13:25:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:04.759 13:25:21 -- common/autotest_common.sh@10 -- # set +x 00:15:04.759 ************************************ 00:15:04.759 START TEST driver 00:15:04.759 ************************************ 00:15:04.759 13:25:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:15:04.759 * Looking for test storage... 
00:15:04.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:04.759 13:25:22 -- setup/driver.sh@68 -- # setup reset 00:15:04.759 13:25:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:04.759 13:25:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:05.339 13:25:22 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:15:05.339 13:25:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:05.339 13:25:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:05.339 13:25:22 -- common/autotest_common.sh@10 -- # set +x 00:15:05.339 ************************************ 00:15:05.339 START TEST guess_driver 00:15:05.339 ************************************ 00:15:05.339 13:25:22 -- common/autotest_common.sh@1111 -- # guess_driver 00:15:05.339 13:25:22 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:15:05.339 13:25:22 -- setup/driver.sh@47 -- # local fail=0 00:15:05.339 13:25:22 -- setup/driver.sh@49 -- # pick_driver 00:15:05.339 13:25:22 -- setup/driver.sh@36 -- # vfio 00:15:05.339 13:25:22 -- setup/driver.sh@21 -- # local iommu_grups 00:15:05.339 13:25:22 -- setup/driver.sh@22 -- # local unsafe_vfio 00:15:05.339 13:25:22 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:15:05.339 13:25:22 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:15:05.339 13:25:22 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:15:05.339 13:25:22 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:15:05.339 13:25:22 -- setup/driver.sh@32 -- # return 1 00:15:05.339 13:25:22 -- setup/driver.sh@38 -- # uio 00:15:05.339 13:25:22 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:15:05.339 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:15:05.339 13:25:22 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:15:05.339 Looking for driver=uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:15:05.339 13:25:22 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:15:05.339 13:25:22 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:15:05.339 13:25:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:05.339 13:25:22 -- setup/driver.sh@45 -- # setup output config 00:15:05.339 13:25:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:05.339 13:25:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:05.906 13:25:23 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:15:05.906 13:25:23 -- setup/driver.sh@58 -- # continue 00:15:05.906 13:25:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:06.165 13:25:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:06.165 13:25:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:06.165 13:25:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:06.165 13:25:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:15:06.165 13:25:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:15:06.165 13:25:23 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:15:06.165 13:25:23 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:15:06.165 13:25:23 -- setup/driver.sh@65 -- # setup reset 00:15:06.165 13:25:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:06.165 13:25:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:06.735 00:15:06.735 real 0m1.416s 00:15:06.735 user 0m0.532s 00:15:06.735 sys 0m0.884s 00:15:06.735 ************************************ 00:15:06.735 END TEST guess_driver 00:15:06.735 ************************************ 00:15:06.735 13:25:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.735 13:25:24 -- common/autotest_common.sh@10 -- # set +x 00:15:06.735 ************************************ 00:15:06.735 END TEST driver 00:15:06.735 ************************************ 00:15:06.735 00:15:06.735 real 0m2.179s 00:15:06.735 user 0m0.783s 00:15:06.735 sys 0m1.434s 00:15:06.735 13:25:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.735 13:25:24 -- common/autotest_common.sh@10 -- # set +x 00:15:06.994 13:25:24 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:06.994 13:25:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.994 13:25:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.994 13:25:24 -- common/autotest_common.sh@10 -- # set +x 00:15:06.994 ************************************ 00:15:06.994 START TEST devices 00:15:06.994 ************************************ 00:15:06.994 13:25:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:15:06.994 * Looking for test storage... 00:15:06.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:15:06.994 13:25:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:15:06.994 13:25:24 -- setup/devices.sh@192 -- # setup reset 00:15:06.994 13:25:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:15:06.994 13:25:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:07.930 13:25:25 -- setup/devices.sh@194 -- # get_zoned_devs 00:15:07.930 13:25:25 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:07.930 13:25:25 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:07.930 13:25:25 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:07.930 13:25:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:07.930 13:25:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:07.930 13:25:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:07.930 13:25:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:07.930 13:25:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:15:07.930 13:25:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:15:07.930 13:25:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:07.930 13:25:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:15:07.930 13:25:25 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:15:07.930 13:25:25 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:07.930 13:25:25 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:07.930 13:25:25 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:07.930 13:25:25 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:07.930 13:25:25 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:07.930 13:25:25 -- setup/devices.sh@196 -- # blocks=() 00:15:07.930 13:25:25 -- setup/devices.sh@196 -- # declare -a blocks 00:15:07.930 13:25:25 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:15:07.930 13:25:25 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:15:07.930 13:25:25 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:15:07.930 13:25:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:07.930 13:25:25 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:15:07.930 13:25:25 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:15:07.930 13:25:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:15:07.930 No valid GPT data, bailing 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # pt= 00:15:07.930 13:25:25 -- scripts/common.sh@392 -- # return 1 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:15:07.930 13:25:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:07.930 13:25:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:07.930 13:25:25 -- setup/common.sh@80 -- # echo 4294967296 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:07.930 13:25:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:07.930 13:25:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:07.930 13:25:25 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:15:07.930 13:25:25 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:15:07.930 13:25:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:15:07.930 No valid GPT data, bailing 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # pt= 00:15:07.930 13:25:25 -- scripts/common.sh@392 -- # return 1 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:15:07.930 13:25:25 -- setup/common.sh@76 -- # local dev=nvme0n2 00:15:07.930 13:25:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:15:07.930 13:25:25 -- setup/common.sh@80 -- # echo 4294967296 00:15:07.930 13:25:25 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:07.930 13:25:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:07.930 13:25:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:15:07.930 13:25:25 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:15:07.930 13:25:25 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:15:07.930 13:25:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:15:07.930 No valid GPT data, bailing 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:15:07.930 13:25:25 -- scripts/common.sh@391 -- # pt= 00:15:07.930 13:25:25 -- scripts/common.sh@392 -- # return 1 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:15:07.930 13:25:25 -- setup/common.sh@76 -- # local dev=nvme0n3 00:15:07.930 13:25:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:15:07.930 13:25:25 -- setup/common.sh@80 -- # echo 4294967296 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:15:07.930 13:25:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:07.930 13:25:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:15:07.930 13:25:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:15:07.930 13:25:25 -- setup/devices.sh@201 -- # ctrl=nvme1 00:15:07.930 13:25:25 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:15:07.930 13:25:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:15:07.930 13:25:25 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:15:07.930 13:25:25 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:15:07.930 13:25:25 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:15:07.930 No valid GPT data, bailing 00:15:08.190 13:25:25 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:15:08.190 13:25:25 -- scripts/common.sh@391 -- # pt= 00:15:08.190 13:25:25 -- scripts/common.sh@392 -- # return 1 00:15:08.190 13:25:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:15:08.190 13:25:25 -- setup/common.sh@76 -- # local dev=nvme1n1 00:15:08.190 13:25:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:15:08.190 13:25:25 -- setup/common.sh@80 -- # echo 5368709120 00:15:08.190 13:25:25 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:15:08.190 13:25:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:15:08.190 13:25:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:15:08.190 13:25:25 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:15:08.190 13:25:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:15:08.190 13:25:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:15:08.190 13:25:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.190 13:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.190 13:25:25 -- common/autotest_common.sh@10 -- # set +x 00:15:08.190 
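The device pass above amounts to a filter: skip zoned namespaces, skip anything that already carries a partition table, and keep whole disks of at least min_disk_size (3 GiB). A rough, hedged equivalent of that filter; the loop below is a sketch, not the devices.sh helper itself:

    min_disk_size=$((3 * 1024 * 1024 * 1024))             # 3221225472, as in devices.sh@198
    for sysblk in /sys/block/nvme*n*; do
        dev=${sysblk##*/}
        zoned=none
        [[ -e $sysblk/queue/zoned ]] && zoned=$(<"$sysblk/queue/zoned")
        [[ $zoned != none ]] && continue                   # skip zoned namespaces
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue   # already partitioned
        size=$(( $(<"$sysblk/size") * 512 ))               # sysfs size counts 512-byte sectors
        (( size >= min_disk_size )) && echo "candidate: /dev/$dev ($size bytes)"
    done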
************************************ 00:15:08.190 START TEST nvme_mount 00:15:08.190 ************************************ 00:15:08.190 13:25:25 -- common/autotest_common.sh@1111 -- # nvme_mount 00:15:08.190 13:25:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:15:08.190 13:25:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:15:08.190 13:25:25 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:08.190 13:25:25 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:08.190 13:25:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:15:08.190 13:25:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:08.190 13:25:25 -- setup/common.sh@40 -- # local part_no=1 00:15:08.190 13:25:25 -- setup/common.sh@41 -- # local size=1073741824 00:15:08.190 13:25:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:08.190 13:25:25 -- setup/common.sh@44 -- # parts=() 00:15:08.190 13:25:25 -- setup/common.sh@44 -- # local parts 00:15:08.190 13:25:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:08.190 13:25:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:08.190 13:25:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:08.190 13:25:25 -- setup/common.sh@46 -- # (( part++ )) 00:15:08.190 13:25:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:08.190 13:25:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:08.190 13:25:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:08.190 13:25:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:15:09.125 Creating new GPT entries in memory. 00:15:09.125 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:09.125 other utilities. 00:15:09.125 13:25:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:09.125 13:25:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:09.125 13:25:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:09.125 13:25:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:09.125 13:25:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:10.499 Creating new GPT entries in memory. 00:15:10.499 The operation has completed successfully. 
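The sgdisk exchange above (zap the label, then create partition 1 at sectors 2048:264191) is the generic partition_drive step; mkfs and mount follow in the trace. Condensed into a hedged sketch, with a placeholder device and mount point rather than the job's real targets:

    disk=/dev/nvme0n1              # placeholder device
    mnt=/tmp/nvme_mount_test       # placeholder mount point
    sgdisk "$disk" --zap-all                     # wipe existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191           # same geometry as the trace (sgdisk counts logical sectors)
    udevadm settle                               # wait for the partition node to appear
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"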
00:15:10.499 13:25:27 -- setup/common.sh@57 -- # (( part++ )) 00:15:10.499 13:25:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:10.499 13:25:27 -- setup/common.sh@62 -- # wait 58358 00:15:10.499 13:25:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.499 13:25:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:15:10.499 13:25:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.499 13:25:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:15:10.499 13:25:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:15:10.499 13:25:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.499 13:25:27 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:10.499 13:25:27 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:10.499 13:25:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:15:10.499 13:25:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.499 13:25:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:10.499 13:25:27 -- setup/devices.sh@53 -- # local found=0 00:15:10.499 13:25:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:10.499 13:25:27 -- setup/devices.sh@56 -- # : 00:15:10.499 13:25:27 -- setup/devices.sh@59 -- # local pci status 00:15:10.499 13:25:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:10.499 13:25:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:10.499 13:25:27 -- setup/devices.sh@47 -- # setup output config 00:15:10.499 13:25:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:10.499 13:25:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:10.499 13:25:27 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.499 13:25:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:15:10.499 13:25:27 -- setup/devices.sh@63 -- # found=1 00:15:10.499 13:25:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:10.499 13:25:27 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.499 13:25:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:10.756 13:25:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.756 13:25:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:10.756 13:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:10.756 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:10.756 13:25:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:10.756 13:25:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:10.757 13:25:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.757 13:25:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:10.757 13:25:28 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:10.757 13:25:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:15:10.757 13:25:28 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.757 13:25:28 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:10.757 13:25:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:10.757 13:25:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:10.757 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:10.757 13:25:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:10.757 13:25:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:11.015 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:11.015 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:11.015 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:11.015 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:11.015 13:25:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:15:11.015 13:25:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:15:11.015 13:25:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:11.015 13:25:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:15:11.015 13:25:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:15:11.015 13:25:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:11.015 13:25:28 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:11.015 13:25:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:11.015 13:25:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:15:11.015 13:25:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:11.015 13:25:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:11.015 13:25:28 -- setup/devices.sh@53 -- # local found=0 00:15:11.015 13:25:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:11.015 13:25:28 -- setup/devices.sh@56 -- # : 00:15:11.015 13:25:28 -- setup/devices.sh@59 -- # local pci status 00:15:11.015 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.015 13:25:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:11.015 13:25:28 -- setup/devices.sh@47 -- # setup output config 00:15:11.015 13:25:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:11.015 13:25:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:11.273 13:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.273 13:25:28 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:15:11.273 13:25:28 -- setup/devices.sh@63 -- # found=1 00:15:11.273 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.273 13:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.273 
13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.532 13:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.532 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.532 13:25:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.532 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.532 13:25:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:11.532 13:25:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:15:11.532 13:25:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:11.532 13:25:28 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:15:11.532 13:25:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:15:11.532 13:25:28 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:11.532 13:25:28 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:15:11.532 13:25:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:11.532 13:25:28 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:15:11.532 13:25:28 -- setup/devices.sh@50 -- # local mount_point= 00:15:11.532 13:25:28 -- setup/devices.sh@51 -- # local test_file= 00:15:11.532 13:25:28 -- setup/devices.sh@53 -- # local found=0 00:15:11.532 13:25:28 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:11.532 13:25:28 -- setup/devices.sh@59 -- # local pci status 00:15:11.532 13:25:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.532 13:25:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:11.532 13:25:28 -- setup/devices.sh@47 -- # setup output config 00:15:11.532 13:25:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:11.532 13:25:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:11.789 13:25:29 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.789 13:25:29 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:15:11.789 13:25:29 -- setup/devices.sh@63 -- # found=1 00:15:11.790 13:25:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:11.790 13:25:29 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:11.790 13:25:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:12.047 13:25:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:12.047 13:25:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:12.047 13:25:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:12.047 13:25:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:12.304 13:25:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:12.304 13:25:29 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:12.304 13:25:29 -- setup/devices.sh@68 -- # return 0 00:15:12.304 13:25:29 -- setup/devices.sh@128 -- # cleanup_nvme 00:15:12.304 13:25:29 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:12.304 13:25:29 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:12.304 13:25:29 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:12.304 13:25:29 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:12.304 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:15:12.304 00:15:12.304 real 0m4.034s 00:15:12.304 user 0m0.669s 00:15:12.304 sys 0m1.085s 00:15:12.304 13:25:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:12.304 13:25:29 -- common/autotest_common.sh@10 -- # set +x 00:15:12.304 ************************************ 00:15:12.304 END TEST nvme_mount 00:15:12.304 ************************************ 00:15:12.304 13:25:29 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:15:12.304 13:25:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:12.304 13:25:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.304 13:25:29 -- common/autotest_common.sh@10 -- # set +x 00:15:12.304 ************************************ 00:15:12.304 START TEST dm_mount 00:15:12.304 ************************************ 00:15:12.305 13:25:29 -- common/autotest_common.sh@1111 -- # dm_mount 00:15:12.305 13:25:29 -- setup/devices.sh@144 -- # pv=nvme0n1 00:15:12.305 13:25:29 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:15:12.305 13:25:29 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:15:12.305 13:25:29 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:15:12.305 13:25:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:15:12.305 13:25:29 -- setup/common.sh@40 -- # local part_no=2 00:15:12.305 13:25:29 -- setup/common.sh@41 -- # local size=1073741824 00:15:12.305 13:25:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:15:12.305 13:25:29 -- setup/common.sh@44 -- # parts=() 00:15:12.305 13:25:29 -- setup/common.sh@44 -- # local parts 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:12.305 13:25:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part++ )) 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:12.305 13:25:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part++ )) 00:15:12.305 13:25:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:15:12.305 13:25:29 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:15:12.305 13:25:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:15:12.305 13:25:29 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:15:13.238 Creating new GPT entries in memory. 00:15:13.238 GPT data structures destroyed! You may now partition the disk using fdisk or 00:15:13.238 other utilities. 00:15:13.238 13:25:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:15:13.238 13:25:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:13.238 13:25:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:15:13.238 13:25:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:13.238 13:25:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:15:14.616 Creating new GPT entries in memory. 00:15:14.616 The operation has completed successfully. 00:15:14.616 13:25:31 -- setup/common.sh@57 -- # (( part++ )) 00:15:14.616 13:25:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:14.616 13:25:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:15:14.616 13:25:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:15:14.616 13:25:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:15:15.552 The operation has completed successfully. 00:15:15.552 13:25:32 -- setup/common.sh@57 -- # (( part++ )) 00:15:15.552 13:25:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:15:15.552 13:25:32 -- setup/common.sh@62 -- # wait 58797 00:15:15.552 13:25:32 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:15:15.552 13:25:32 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:15.552 13:25:32 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:15.552 13:25:32 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:15:15.552 13:25:32 -- setup/devices.sh@160 -- # for t in {1..5} 00:15:15.552 13:25:32 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:15.552 13:25:32 -- setup/devices.sh@161 -- # break 00:15:15.552 13:25:32 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:15.552 13:25:32 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:15:15.552 13:25:32 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:15:15.552 13:25:32 -- setup/devices.sh@166 -- # dm=dm-0 00:15:15.552 13:25:32 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:15:15.552 13:25:32 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:15:15.552 13:25:32 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:15.552 13:25:32 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:15:15.552 13:25:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:15.552 13:25:32 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:15:15.552 13:25:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:15:15.552 13:25:32 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:15.552 13:25:32 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:15.552 13:25:32 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:15.552 13:25:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:15:15.552 13:25:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:15.552 13:25:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:15.552 13:25:32 -- setup/devices.sh@53 -- # local found=0 00:15:15.552 13:25:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:15:15.552 13:25:32 -- setup/devices.sh@56 -- # : 00:15:15.552 13:25:32 -- setup/devices.sh@59 -- # local pci status 00:15:15.552 13:25:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:15.552 13:25:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:15.552 13:25:32 -- setup/devices.sh@47 -- # setup output config 00:15:15.552 13:25:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:15.552 13:25:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:15.811 13:25:33 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:15.811 13:25:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:15:15.811 13:25:33 -- setup/devices.sh@63 -- # found=1 00:15:15.811 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:15.811 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:15.811 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:15.811 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:15.811 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:15.811 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:15.811 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.069 13:25:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:16.069 13:25:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:15:16.069 13:25:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:16.069 13:25:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:15:16.069 13:25:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:15:16.069 13:25:33 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:16.069 13:25:33 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:15:16.069 13:25:33 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:15:16.069 13:25:33 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:15:16.069 13:25:33 -- setup/devices.sh@50 -- # local mount_point= 00:15:16.069 13:25:33 -- setup/devices.sh@51 -- # local test_file= 00:15:16.069 13:25:33 -- setup/devices.sh@53 -- # local found=0 00:15:16.069 13:25:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:15:16.069 13:25:33 -- setup/devices.sh@59 -- # local pci status 00:15:16.069 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.069 13:25:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:15:16.069 13:25:33 -- setup/devices.sh@47 -- # setup output config 00:15:16.069 13:25:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:15:16.069 13:25:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:15:16.069 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.069 13:25:33 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:15:16.069 13:25:33 -- setup/devices.sh@63 -- # found=1 00:15:16.069 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.069 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.069 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.326 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.326 13:25:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.326 13:25:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:16.326 13:25:33 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:15:16.584 13:25:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:15:16.584 13:25:33 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:15:16.584 13:25:33 -- setup/devices.sh@68 -- # return 0 00:15:16.584 13:25:33 -- setup/devices.sh@187 -- # cleanup_dm 00:15:16.584 13:25:33 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:16.584 13:25:33 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:16.584 13:25:33 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:15:16.584 13:25:33 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:16.584 13:25:33 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:15:16.584 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:15:16.584 13:25:33 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:16.584 13:25:33 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:15:16.584 00:15:16.584 real 0m4.249s 00:15:16.584 user 0m0.467s 00:15:16.584 sys 0m0.737s 00:15:16.584 13:25:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.584 13:25:33 -- common/autotest_common.sh@10 -- # set +x 00:15:16.584 ************************************ 00:15:16.584 END TEST dm_mount 00:15:16.585 ************************************ 00:15:16.585 13:25:33 -- setup/devices.sh@1 -- # cleanup 00:15:16.585 13:25:33 -- setup/devices.sh@11 -- # cleanup_nvme 00:15:16.585 13:25:33 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:15:16.585 13:25:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:16.585 13:25:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:15:16.585 13:25:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:15:16.585 13:25:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:15:16.842 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:15:16.842 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:15:16.842 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:15:16.842 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:15:16.842 13:25:34 -- setup/devices.sh@12 -- # cleanup_dm 00:15:16.842 13:25:34 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:15:16.842 13:25:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:15:16.842 13:25:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:15:16.843 13:25:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:15:16.843 13:25:34 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:15:16.843 13:25:34 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:15:16.843 00:15:16.843 real 0m9.936s 00:15:16.843 user 0m1.815s 00:15:16.843 sys 0m2.486s 00:15:16.843 13:25:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.843 ************************************ 00:15:16.843 END TEST devices 00:15:16.843 ************************************ 00:15:16.843 13:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.843 00:15:16.843 real 0m22.572s 00:15:16.843 user 0m7.255s 00:15:16.843 sys 0m9.563s 00:15:16.843 13:25:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:16.843 13:25:34 -- common/autotest_common.sh@10 -- # set +x 00:15:16.843 ************************************ 00:15:16.843 END TEST setup.sh 00:15:16.843 ************************************ 00:15:17.100 13:25:34 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:15:17.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:17.667 Hugepages 00:15:17.668 node hugesize free / total 00:15:17.668 node0 1048576kB 0 / 0 00:15:17.668 node0 2048kB 2048 / 2048 00:15:17.668 00:15:17.668 Type BDF Vendor Device NUMA Driver Device Block devices 00:15:17.668 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:15:17.926 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:15:17.926 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:15:17.926 13:25:35 -- spdk/autotest.sh@130 -- # uname -s 00:15:17.926 13:25:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:15:17.926 13:25:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:15:17.926 13:25:35 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:18.797 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.797 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:18.797 13:25:36 -- common/autotest_common.sh@1518 -- # sleep 1 00:15:19.733 13:25:37 -- common/autotest_common.sh@1519 -- # bdfs=() 00:15:19.733 13:25:37 -- common/autotest_common.sh@1519 -- # local bdfs 00:15:19.733 13:25:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:15:19.733 13:25:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:15:19.733 13:25:37 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:19.733 13:25:37 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:19.733 13:25:37 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:19.733 13:25:37 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:19.733 13:25:37 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:19.733 13:25:37 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:15:19.733 13:25:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:19.733 13:25:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:20.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.298 Waiting for block devices as requested 00:15:20.298 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.298 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:20.298 13:25:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:20.298 13:25:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:15:20.298 13:25:37 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:15:20.298 13:25:37 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:15:20.298 13:25:37 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:15:20.298 13:25:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:20.298 13:25:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:20.298 13:25:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:20.298 13:25:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:20.298 13:25:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:20.298 13:25:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:15:20.298 13:25:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:20.298 13:25:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:20.555 13:25:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:20.555 13:25:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:20.555 13:25:37 -- common/autotest_common.sh@1543 -- # continue 00:15:20.555 13:25:37 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:15:20.555 13:25:37 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:15:20.555 13:25:37 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:15:20.555 13:25:37 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:15:20.556 13:25:37 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:15:20.556 13:25:37 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:15:20.556 13:25:37 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1531 -- # grep oacs 00:15:20.556 13:25:37 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:15:20.556 13:25:37 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:15:20.556 13:25:37 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:15:20.556 13:25:37 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:15:20.556 13:25:37 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:15:20.556 13:25:37 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:15:20.556 13:25:37 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:15:20.556 13:25:37 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:15:20.556 13:25:37 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:15:20.556 13:25:37 -- common/autotest_common.sh@1543 -- # continue 00:15:20.556 13:25:37 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:15:20.556 13:25:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:20.556 13:25:37 -- common/autotest_common.sh@10 -- # set +x 00:15:20.556 13:25:37 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:15:20.556 13:25:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:20.556 13:25:37 -- common/autotest_common.sh@10 -- # set +x 00:15:20.556 13:25:37 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:21.122 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:15:21.381 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:21.381 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:21.381 13:25:38 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:15:21.381 13:25:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:21.381 13:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:21.381 13:25:38 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:15:21.381 13:25:38 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:15:21.381 13:25:38 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:15:21.381 13:25:38 -- common/autotest_common.sh@1563 -- # bdfs=() 00:15:21.381 13:25:38 -- common/autotest_common.sh@1563 -- # local bdfs 00:15:21.381 13:25:38 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:15:21.381 13:25:38 -- common/autotest_common.sh@1499 -- # bdfs=() 00:15:21.381 13:25:38 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:21.381 13:25:38 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:21.381 13:25:38 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:21.381 13:25:38 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:21.381 13:25:38 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:15:21.381 13:25:38 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:21.381 13:25:38 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:21.381 13:25:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:15:21.381 13:25:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:21.381 13:25:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:21.381 13:25:38 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:15:21.381 13:25:38 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:15:21.665 13:25:38 -- common/autotest_common.sh@1566 -- # device=0x0010 00:15:21.665 13:25:38 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:15:21.665 13:25:38 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:15:21.665 13:25:38 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:15:21.665 13:25:38 -- common/autotest_common.sh@1579 -- # return 0 00:15:21.665 13:25:38 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:15:21.665 13:25:38 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:15:21.665 13:25:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:21.665 13:25:38 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:15:21.665 13:25:38 -- spdk/autotest.sh@162 -- # timing_enter lib 00:15:21.666 13:25:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:21.666 13:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:21.666 13:25:38 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:21.666 13:25:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:21.666 13:25:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.666 13:25:38 -- common/autotest_common.sh@10 -- # set +x 00:15:21.666 ************************************ 00:15:21.666 START TEST env 00:15:21.666 ************************************ 00:15:21.666 13:25:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:15:21.666 * Looking for test storage... 
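For reference, the opal_revert_cleanup / get_nvme_bdfs_by_id logic traced just above reduces to the shell below. This is a hedged reconstruction from the visible trace, not the autotest_common.sh source: it replays the same gen_nvme.sh + jq enumeration and the 0x0a54 PCI device-ID check (the emulated controllers in this run report 0x0010, so the list comes back empty and the revert is skipped).

```bash
# Sketch reconstructed from the trace above (not the real autotest_common.sh helper).
rootdir=/home/vagrant/spdk_repo/spdk

# Enumerate NVMe BDFs the same way the trace does: gen_nvme.sh emits a JSON
# config whose attach parameters carry the PCI addresses.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    # Only controllers reporting PCI device ID 0x0a54 need the Opal revert;
    # the QEMU drives in this run report 0x0010, so nothing is selected.
    [[ $device == 0x0a54 ]] && echo "$bdf"
done
```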
00:15:21.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:15:21.666 13:25:39 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:21.666 13:25:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:21.666 13:25:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.666 13:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:21.666 ************************************ 00:15:21.666 START TEST env_memory 00:15:21.666 ************************************ 00:15:21.666 13:25:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:15:21.925 00:15:21.925 00:15:21.926 CUnit - A unit testing framework for C - Version 2.1-3 00:15:21.926 http://cunit.sourceforge.net/ 00:15:21.926 00:15:21.926 00:15:21.926 Suite: memory 00:15:21.926 Test: alloc and free memory map ...[2024-04-26 13:25:39.139727] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:15:21.926 passed 00:15:21.926 Test: mem map translation ...[2024-04-26 13:25:39.171480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:15:21.926 [2024-04-26 13:25:39.171528] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:15:21.926 [2024-04-26 13:25:39.171585] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:15:21.926 [2024-04-26 13:25:39.171597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:15:21.926 passed 00:15:21.926 Test: mem map registration ...[2024-04-26 13:25:39.237999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:15:21.926 [2024-04-26 13:25:39.238061] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:15:21.926 passed 00:15:21.926 Test: mem map adjacent registrations ...passed 00:15:21.926 00:15:21.926 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.926 suites 1 1 n/a 0 0 00:15:21.926 tests 4 4 4 0 0 00:15:21.926 asserts 152 152 152 0 n/a 00:15:21.926 00:15:21.926 Elapsed time = 0.220 seconds 00:15:21.926 00:15:21.926 real 0m0.239s 00:15:21.926 user 0m0.216s 00:15:21.926 sys 0m0.018s 00:15:21.926 13:25:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.926 ************************************ 00:15:21.926 END TEST env_memory 00:15:21.926 ************************************ 00:15:21.926 13:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:21.926 13:25:39 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:21.926 13:25:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:21.926 13:25:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.926 13:25:39 -- common/autotest_common.sh@10 -- # set +x 00:15:22.185 ************************************ 00:15:22.185 START TEST env_vtophys 00:15:22.185 ************************************ 00:15:22.185 13:25:39 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:15:22.185 EAL: lib.eal log level changed from notice to debug 00:15:22.185 EAL: Detected lcore 0 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 1 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 2 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 3 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 4 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 5 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 6 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 7 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 8 as core 0 on socket 0 00:15:22.185 EAL: Detected lcore 9 as core 0 on socket 0 00:15:22.185 EAL: Maximum logical cores by configuration: 128 00:15:22.185 EAL: Detected CPU lcores: 10 00:15:22.185 EAL: Detected NUMA nodes: 1 00:15:22.185 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:15:22.185 EAL: Detected shared linkage of DPDK 00:15:22.185 EAL: No shared files mode enabled, IPC will be disabled 00:15:22.185 EAL: Selected IOVA mode 'PA' 00:15:22.185 EAL: Probing VFIO support... 00:15:22.185 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:22.185 EAL: VFIO modules not loaded, skipping VFIO support... 00:15:22.185 EAL: Ask a virtual area of 0x2e000 bytes 00:15:22.185 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:15:22.185 EAL: Setting up physically contiguous memory... 00:15:22.185 EAL: Setting maximum number of open files to 524288 00:15:22.185 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:15:22.185 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:15:22.185 EAL: Ask a virtual area of 0x61000 bytes 00:15:22.185 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:15:22.185 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:22.185 EAL: Ask a virtual area of 0x400000000 bytes 00:15:22.185 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:15:22.186 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:15:22.186 EAL: Ask a virtual area of 0x61000 bytes 00:15:22.186 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:15:22.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:22.186 EAL: Ask a virtual area of 0x400000000 bytes 00:15:22.186 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:15:22.186 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:15:22.186 EAL: Ask a virtual area of 0x61000 bytes 00:15:22.186 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:15:22.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:22.186 EAL: Ask a virtual area of 0x400000000 bytes 00:15:22.186 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:15:22.186 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:15:22.186 EAL: Ask a virtual area of 0x61000 bytes 00:15:22.186 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:15:22.186 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:15:22.186 EAL: Ask a virtual area of 0x400000000 bytes 00:15:22.186 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:15:22.186 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:15:22.186 EAL: Hugepages will be freed exactly as allocated. 
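The "Hugepages will be freed exactly as allocated" line above refers to the 2 MB pages reserved earlier (the setup.sh status output showed "node0 2048kB 2048 / 2048"). As a hedged aside, the same counters can be read straight from standard kernel sysfs, independent of setup.sh:

```bash
# Hedged helper (standard kernel sysfs paths, not an SPDK script): print the
# per-node hugepage totals backing the EAL heap used by this vtophys run.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}
        printf '%s %s free=%s total=%s\n' \
            "${node##*/}" "$size" \
            "$(cat "$hp/free_hugepages")" \
            "$(cat "$hp/nr_hugepages")"
    done
done
```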
00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: TSC frequency is ~2200000 KHz 00:15:22.186 EAL: Main lcore 0 is ready (tid=7f684d722a00;cpuset=[0]) 00:15:22.186 EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 0 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 2MB 00:15:22.186 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:15:22.186 EAL: No PCI address specified using 'addr=' in: bus=pci 00:15:22.186 EAL: Mem event callback 'spdk:(nil)' registered 00:15:22.186 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:15:22.186 00:15:22.186 00:15:22.186 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.186 http://cunit.sourceforge.net/ 00:15:22.186 00:15:22.186 00:15:22.186 Suite: components_suite 00:15:22.186 Test: vtophys_malloc_test ...passed 00:15:22.186 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 4 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 4MB 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was shrunk by 4MB 00:15:22.186 EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 4 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 6MB 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was shrunk by 6MB 00:15:22.186 EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 4 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 10MB 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was shrunk by 10MB 00:15:22.186 EAL: Trying to obtain current memory policy. 
00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 4 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 18MB 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was shrunk by 18MB 00:15:22.186 EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.186 EAL: Restoring previous memory policy: 4 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was expanded by 34MB 00:15:22.186 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.186 EAL: request: mp_malloc_sync 00:15:22.186 EAL: No shared files mode enabled, IPC is disabled 00:15:22.186 EAL: Heap on socket 0 was shrunk by 34MB 00:15:22.186 EAL: Trying to obtain current memory policy. 00:15:22.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.445 EAL: Restoring previous memory policy: 4 00:15:22.445 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.445 EAL: request: mp_malloc_sync 00:15:22.445 EAL: No shared files mode enabled, IPC is disabled 00:15:22.446 EAL: Heap on socket 0 was expanded by 66MB 00:15:22.446 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.446 EAL: request: mp_malloc_sync 00:15:22.446 EAL: No shared files mode enabled, IPC is disabled 00:15:22.446 EAL: Heap on socket 0 was shrunk by 66MB 00:15:22.446 EAL: Trying to obtain current memory policy. 00:15:22.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.446 EAL: Restoring previous memory policy: 4 00:15:22.446 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.446 EAL: request: mp_malloc_sync 00:15:22.446 EAL: No shared files mode enabled, IPC is disabled 00:15:22.446 EAL: Heap on socket 0 was expanded by 130MB 00:15:22.446 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.446 EAL: request: mp_malloc_sync 00:15:22.446 EAL: No shared files mode enabled, IPC is disabled 00:15:22.446 EAL: Heap on socket 0 was shrunk by 130MB 00:15:22.446 EAL: Trying to obtain current memory policy. 00:15:22.446 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.446 EAL: Restoring previous memory policy: 4 00:15:22.446 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.446 EAL: request: mp_malloc_sync 00:15:22.446 EAL: No shared files mode enabled, IPC is disabled 00:15:22.446 EAL: Heap on socket 0 was expanded by 258MB 00:15:22.446 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.705 EAL: request: mp_malloc_sync 00:15:22.705 EAL: No shared files mode enabled, IPC is disabled 00:15:22.705 EAL: Heap on socket 0 was shrunk by 258MB 00:15:22.705 EAL: Trying to obtain current memory policy. 
00:15:22.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:22.705 EAL: Restoring previous memory policy: 4 00:15:22.705 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.705 EAL: request: mp_malloc_sync 00:15:22.705 EAL: No shared files mode enabled, IPC is disabled 00:15:22.705 EAL: Heap on socket 0 was expanded by 514MB 00:15:22.964 EAL: Calling mem event callback 'spdk:(nil)' 00:15:22.964 EAL: request: mp_malloc_sync 00:15:22.964 EAL: No shared files mode enabled, IPC is disabled 00:15:22.964 EAL: Heap on socket 0 was shrunk by 514MB 00:15:22.964 EAL: Trying to obtain current memory policy. 00:15:22.964 EAL: Setting policy MPOL_PREFERRED for socket 0 00:15:23.222 EAL: Restoring previous memory policy: 4 00:15:23.222 EAL: Calling mem event callback 'spdk:(nil)' 00:15:23.222 EAL: request: mp_malloc_sync 00:15:23.222 EAL: No shared files mode enabled, IPC is disabled 00:15:23.222 EAL: Heap on socket 0 was expanded by 1026MB 00:15:23.481 EAL: Calling mem event callback 'spdk:(nil)' 00:15:23.481 passed 00:15:23.481 00:15:23.481 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.481 suites 1 1 n/a 0 0 00:15:23.481 tests 2 2 2 0 0 00:15:23.481 asserts 5148 5148 5148 0 n/a 00:15:23.481 00:15:23.481 Elapsed time = 1.284 seconds 00:15:23.481 EAL: request: mp_malloc_sync 00:15:23.481 EAL: No shared files mode enabled, IPC is disabled 00:15:23.481 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:23.481 EAL: Calling mem event callback 'spdk:(nil)' 00:15:23.740 EAL: request: mp_malloc_sync 00:15:23.740 EAL: No shared files mode enabled, IPC is disabled 00:15:23.740 EAL: Heap on socket 0 was shrunk by 2MB 00:15:23.740 EAL: No shared files mode enabled, IPC is disabled 00:15:23.740 EAL: No shared files mode enabled, IPC is disabled 00:15:23.740 EAL: No shared files mode enabled, IPC is disabled 00:15:23.740 00:15:23.740 real 0m1.491s 00:15:23.740 user 0m0.812s 00:15:23.740 sys 0m0.537s 00:15:23.740 13:25:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:23.740 ************************************ 00:15:23.740 END TEST env_vtophys 00:15:23.740 13:25:40 -- common/autotest_common.sh@10 -- # set +x 00:15:23.740 ************************************ 00:15:23.740 13:25:40 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:23.740 13:25:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:23.740 13:25:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.740 13:25:40 -- common/autotest_common.sh@10 -- # set +x 00:15:23.740 ************************************ 00:15:23.740 START TEST env_pci 00:15:23.740 ************************************ 00:15:23.740 13:25:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:23.740 00:15:23.740 00:15:23.740 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.740 http://cunit.sourceforge.net/ 00:15:23.740 00:15:23.740 00:15:23.740 Suite: pci 00:15:23.740 Test: pci_hook ...[2024-04-26 13:25:41.075526] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60007 has claimed it 00:15:23.740 passed 00:15:23.740 00:15:23.740 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.740 suites 1 1 n/a 0 0 00:15:23.740 tests 1 1 1 0 0 00:15:23.740 asserts 25 25 25 0 n/a 00:15:23.740 00:15:23.740 Elapsed time = 0.002 seconds 00:15:23.740 EAL: Cannot find device (10000:00:01.0) 00:15:23.740 EAL: Failed to attach device 
on primary process 00:15:23.740 00:15:23.740 real 0m0.020s 00:15:23.740 user 0m0.008s 00:15:23.740 sys 0m0.011s 00:15:23.740 13:25:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:23.740 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:23.740 ************************************ 00:15:23.740 END TEST env_pci 00:15:23.740 ************************************ 00:15:23.740 13:25:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:23.740 13:25:41 -- env/env.sh@15 -- # uname 00:15:23.740 13:25:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:23.740 13:25:41 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:23.740 13:25:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:23.740 13:25:41 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:23.740 13:25:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.740 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.000 ************************************ 00:15:24.000 START TEST env_dpdk_post_init 00:15:24.000 ************************************ 00:15:24.000 13:25:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:24.000 EAL: Detected CPU lcores: 10 00:15:24.000 EAL: Detected NUMA nodes: 1 00:15:24.000 EAL: Detected shared linkage of DPDK 00:15:24.000 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:24.000 EAL: Selected IOVA mode 'PA' 00:15:24.000 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:24.000 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:24.000 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:24.000 Starting DPDK initialization... 00:15:24.000 Starting SPDK post initialization... 00:15:24.000 SPDK NVMe probe 00:15:24.000 Attaching to 0000:00:10.0 00:15:24.000 Attaching to 0000:00:11.0 00:15:24.000 Attached to 0000:00:10.0 00:15:24.000 Attached to 0000:00:11.0 00:15:24.000 Cleaning up... 
00:15:24.000 00:15:24.000 real 0m0.183s 00:15:24.000 user 0m0.046s 00:15:24.000 sys 0m0.037s 00:15:24.000 13:25:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.000 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.000 ************************************ 00:15:24.000 END TEST env_dpdk_post_init 00:15:24.000 ************************************ 00:15:24.000 13:25:41 -- env/env.sh@26 -- # uname 00:15:24.000 13:25:41 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:24.000 13:25:41 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:24.000 13:25:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:24.000 13:25:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.000 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.259 ************************************ 00:15:24.259 START TEST env_mem_callbacks 00:15:24.259 ************************************ 00:15:24.259 13:25:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:24.259 EAL: Detected CPU lcores: 10 00:15:24.259 EAL: Detected NUMA nodes: 1 00:15:24.259 EAL: Detected shared linkage of DPDK 00:15:24.259 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:24.259 EAL: Selected IOVA mode 'PA' 00:15:24.259 00:15:24.259 00:15:24.259 CUnit - A unit testing framework for C - Version 2.1-3 00:15:24.259 http://cunit.sourceforge.net/ 00:15:24.259 00:15:24.259 00:15:24.259 Suite: memory 00:15:24.259 Test: test ... 00:15:24.259 register 0x200000200000 2097152 00:15:24.259 malloc 3145728 00:15:24.259 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:24.259 register 0x200000400000 4194304 00:15:24.259 buf 0x200000500000 len 3145728 PASSED 00:15:24.259 malloc 64 00:15:24.259 buf 0x2000004fff40 len 64 PASSED 00:15:24.259 malloc 4194304 00:15:24.259 register 0x200000800000 6291456 00:15:24.259 buf 0x200000a00000 len 4194304 PASSED 00:15:24.259 free 0x200000500000 3145728 00:15:24.259 free 0x2000004fff40 64 00:15:24.259 unregister 0x200000400000 4194304 PASSED 00:15:24.259 free 0x200000a00000 4194304 00:15:24.259 unregister 0x200000800000 6291456 PASSED 00:15:24.259 malloc 8388608 00:15:24.259 register 0x200000400000 10485760 00:15:24.259 buf 0x200000600000 len 8388608 PASSED 00:15:24.259 free 0x200000600000 8388608 00:15:24.259 unregister 0x200000400000 10485760 PASSED 00:15:24.259 passed 00:15:24.259 00:15:24.259 Run Summary: Type Total Ran Passed Failed Inactive 00:15:24.259 suites 1 1 n/a 0 0 00:15:24.259 tests 1 1 1 0 0 00:15:24.259 asserts 15 15 15 0 n/a 00:15:24.259 00:15:24.259 Elapsed time = 0.007 seconds 00:15:24.259 00:15:24.259 real 0m0.145s 00:15:24.259 user 0m0.015s 00:15:24.259 sys 0m0.028s 00:15:24.259 13:25:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.259 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.259 ************************************ 00:15:24.259 END TEST env_mem_callbacks 00:15:24.259 ************************************ 00:15:24.259 00:15:24.259 real 0m2.765s 00:15:24.259 user 0m1.314s 00:15:24.259 sys 0m1.017s 00:15:24.259 13:25:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:24.259 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.259 ************************************ 00:15:24.259 END TEST env 00:15:24.259 ************************************ 00:15:24.518 13:25:41 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
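Each "START TEST … / END TEST …" section in this log comes from the run_test helper invoked above (run_test env …, run_test rpc …). The snippet below is only an approximation inferred from the output, not the real test/common/autotest_common.sh implementation; it shows the banner-plus-timing shape that produces the real/user/sys lines seen after every test.

```bash
# Approximation of run_test as observed in this log; the actual helper in
# test/common/autotest_common.sh also validates arguments and manages xtrace.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # emits the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# Usage matching the trace above:
run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
```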
00:15:24.518 13:25:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:24.518 13:25:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:24.518 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.518 ************************************ 00:15:24.518 START TEST rpc 00:15:24.518 ************************************ 00:15:24.518 13:25:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:24.518 * Looking for test storage... 00:15:24.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:24.518 13:25:41 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:24.518 13:25:41 -- rpc/rpc.sh@65 -- # spdk_pid=60135 00:15:24.518 13:25:41 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:24.518 13:25:41 -- rpc/rpc.sh@67 -- # waitforlisten 60135 00:15:24.518 13:25:41 -- common/autotest_common.sh@817 -- # '[' -z 60135 ']' 00:15:24.518 13:25:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.518 13:25:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:24.518 13:25:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.518 13:25:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:24.518 13:25:41 -- common/autotest_common.sh@10 -- # set +x 00:15:24.777 [2024-04-26 13:25:41.975259] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:24.777 [2024-04-26 13:25:41.975403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60135 ] 00:15:24.777 [2024-04-26 13:25:42.122597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.047 [2024-04-26 13:25:42.256989] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:25.047 [2024-04-26 13:25:42.257052] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60135' to capture a snapshot of events at runtime. 00:15:25.047 [2024-04-26 13:25:42.257067] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.047 [2024-04-26 13:25:42.257078] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.047 [2024-04-26 13:25:42.257087] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60135 for offline analysis/debug. 
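The app_setup_trace notices above spell out how to collect the bdev tracepoints enabled by "-e bdev": attach spdk_trace to the live target, or keep the shared-memory file for offline decoding. A minimal sketch using the PID and paths from this run follows; the spdk_trace binary location is assumed to match the build/bin directory spdk_tgt was started from.

```bash
# Snapshot the live trace, as suggested by the NOTICE lines above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 60135 > /tmp/spdk_tgt_trace.txt

# ...or preserve the raw shared-memory ring for offline analysis/debug.
cp /dev/shm/spdk_tgt_trace.pid60135 /tmp/
```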
00:15:25.047 [2024-04-26 13:25:42.257124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.614 13:25:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:25.614 13:25:43 -- common/autotest_common.sh@850 -- # return 0 00:15:25.614 13:25:43 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:25.614 13:25:43 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:25.615 13:25:43 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:25.615 13:25:43 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:25.615 13:25:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:25.615 13:25:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:25.615 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.874 ************************************ 00:15:25.874 START TEST rpc_integrity 00:15:25.874 ************************************ 00:15:25.874 13:25:43 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:25.874 13:25:43 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:25.874 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.874 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.874 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.874 13:25:43 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:25.874 13:25:43 -- rpc/rpc.sh@13 -- # jq length 00:15:25.874 13:25:43 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:25.874 13:25:43 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:25.874 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.874 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.874 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.874 13:25:43 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:25.874 13:25:43 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:25.874 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.874 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.874 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.874 13:25:43 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:25.874 { 00:15:25.874 "aliases": [ 00:15:25.874 "cae25a6e-33bb-4876-aeec-d4a80070132f" 00:15:25.874 ], 00:15:25.874 "assigned_rate_limits": { 00:15:25.874 "r_mbytes_per_sec": 0, 00:15:25.874 "rw_ios_per_sec": 0, 00:15:25.874 "rw_mbytes_per_sec": 0, 00:15:25.874 "w_mbytes_per_sec": 0 00:15:25.874 }, 00:15:25.874 "block_size": 512, 00:15:25.874 "claimed": false, 00:15:25.874 "driver_specific": {}, 00:15:25.874 "memory_domains": [ 00:15:25.874 { 00:15:25.874 "dma_device_id": "system", 00:15:25.874 "dma_device_type": 1 00:15:25.874 }, 00:15:25.874 { 00:15:25.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.874 "dma_device_type": 2 00:15:25.874 } 00:15:25.874 ], 00:15:25.874 "name": "Malloc0", 00:15:25.874 "num_blocks": 16384, 00:15:25.874 "product_name": "Malloc disk", 00:15:25.874 "supported_io_types": { 00:15:25.874 "abort": true, 00:15:25.874 "compare": false, 00:15:25.874 "compare_and_write": false, 00:15:25.874 "flush": true, 00:15:25.874 "nvme_admin": false, 00:15:25.874 "nvme_io": false, 00:15:25.874 "read": true, 00:15:25.874 "reset": true, 
00:15:25.874 "unmap": true, 00:15:25.874 "write": true, 00:15:25.874 "write_zeroes": true 00:15:25.874 }, 00:15:25.874 "uuid": "cae25a6e-33bb-4876-aeec-d4a80070132f", 00:15:25.874 "zoned": false 00:15:25.874 } 00:15:25.874 ]' 00:15:25.874 13:25:43 -- rpc/rpc.sh@17 -- # jq length 00:15:25.874 13:25:43 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:25.874 13:25:43 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:25.874 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.874 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:25.874 [2024-04-26 13:25:43.290869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:25.874 [2024-04-26 13:25:43.290934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.874 [2024-04-26 13:25:43.290957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fec7f0 00:15:25.874 [2024-04-26 13:25:43.290967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.874 [2024-04-26 13:25:43.292985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.874 [2024-04-26 13:25:43.293021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:25.874 Passthru0 00:15:25.874 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:25.874 13:25:43 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:25.874 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:25.874 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.132 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.132 13:25:43 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:26.132 { 00:15:26.132 "aliases": [ 00:15:26.132 "cae25a6e-33bb-4876-aeec-d4a80070132f" 00:15:26.132 ], 00:15:26.132 "assigned_rate_limits": { 00:15:26.132 "r_mbytes_per_sec": 0, 00:15:26.132 "rw_ios_per_sec": 0, 00:15:26.132 "rw_mbytes_per_sec": 0, 00:15:26.132 "w_mbytes_per_sec": 0 00:15:26.132 }, 00:15:26.132 "block_size": 512, 00:15:26.132 "claim_type": "exclusive_write", 00:15:26.132 "claimed": true, 00:15:26.132 "driver_specific": {}, 00:15:26.132 "memory_domains": [ 00:15:26.132 { 00:15:26.132 "dma_device_id": "system", 00:15:26.132 "dma_device_type": 1 00:15:26.132 }, 00:15:26.132 { 00:15:26.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.132 "dma_device_type": 2 00:15:26.132 } 00:15:26.132 ], 00:15:26.132 "name": "Malloc0", 00:15:26.132 "num_blocks": 16384, 00:15:26.132 "product_name": "Malloc disk", 00:15:26.132 "supported_io_types": { 00:15:26.132 "abort": true, 00:15:26.133 "compare": false, 00:15:26.133 "compare_and_write": false, 00:15:26.133 "flush": true, 00:15:26.133 "nvme_admin": false, 00:15:26.133 "nvme_io": false, 00:15:26.133 "read": true, 00:15:26.133 "reset": true, 00:15:26.133 "unmap": true, 00:15:26.133 "write": true, 00:15:26.133 "write_zeroes": true 00:15:26.133 }, 00:15:26.133 "uuid": "cae25a6e-33bb-4876-aeec-d4a80070132f", 00:15:26.133 "zoned": false 00:15:26.133 }, 00:15:26.133 { 00:15:26.133 "aliases": [ 00:15:26.133 "c6596654-3945-5874-a54d-b7f59a7cfc61" 00:15:26.133 ], 00:15:26.133 "assigned_rate_limits": { 00:15:26.133 "r_mbytes_per_sec": 0, 00:15:26.133 "rw_ios_per_sec": 0, 00:15:26.133 "rw_mbytes_per_sec": 0, 00:15:26.133 "w_mbytes_per_sec": 0 00:15:26.133 }, 00:15:26.133 "block_size": 512, 00:15:26.133 "claimed": false, 00:15:26.133 "driver_specific": { 00:15:26.133 "passthru": { 00:15:26.133 "base_bdev_name": "Malloc0", 00:15:26.133 "name": 
"Passthru0" 00:15:26.133 } 00:15:26.133 }, 00:15:26.133 "memory_domains": [ 00:15:26.133 { 00:15:26.133 "dma_device_id": "system", 00:15:26.133 "dma_device_type": 1 00:15:26.133 }, 00:15:26.133 { 00:15:26.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.133 "dma_device_type": 2 00:15:26.133 } 00:15:26.133 ], 00:15:26.133 "name": "Passthru0", 00:15:26.133 "num_blocks": 16384, 00:15:26.133 "product_name": "passthru", 00:15:26.133 "supported_io_types": { 00:15:26.133 "abort": true, 00:15:26.133 "compare": false, 00:15:26.133 "compare_and_write": false, 00:15:26.133 "flush": true, 00:15:26.133 "nvme_admin": false, 00:15:26.133 "nvme_io": false, 00:15:26.133 "read": true, 00:15:26.133 "reset": true, 00:15:26.133 "unmap": true, 00:15:26.133 "write": true, 00:15:26.133 "write_zeroes": true 00:15:26.133 }, 00:15:26.133 "uuid": "c6596654-3945-5874-a54d-b7f59a7cfc61", 00:15:26.133 "zoned": false 00:15:26.133 } 00:15:26.133 ]' 00:15:26.133 13:25:43 -- rpc/rpc.sh@21 -- # jq length 00:15:26.133 13:25:43 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:26.133 13:25:43 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:26.133 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.133 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.133 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.133 13:25:43 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:26.133 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.133 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.133 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.133 13:25:43 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:26.133 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.133 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.133 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.133 13:25:43 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:26.133 13:25:43 -- rpc/rpc.sh@26 -- # jq length 00:15:26.133 13:25:43 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:26.133 00:15:26.133 real 0m0.323s 00:15:26.133 user 0m0.195s 00:15:26.133 sys 0m0.044s 00:15:26.133 13:25:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.133 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.133 ************************************ 00:15:26.133 END TEST rpc_integrity 00:15:26.133 ************************************ 00:15:26.133 13:25:43 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:26.133 13:25:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:26.133 13:25:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.133 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 ************************************ 00:15:26.391 START TEST rpc_plugins 00:15:26.391 ************************************ 00:15:26.391 13:25:43 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:15:26.391 13:25:43 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:26.391 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.391 13:25:43 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:26.391 13:25:43 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:26.391 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 13:25:43 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.391 13:25:43 -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:26.391 { 00:15:26.391 "aliases": [ 00:15:26.391 "10486469-de0f-4d98-acb2-569f6670e319" 00:15:26.391 ], 00:15:26.391 "assigned_rate_limits": { 00:15:26.391 "r_mbytes_per_sec": 0, 00:15:26.391 "rw_ios_per_sec": 0, 00:15:26.391 "rw_mbytes_per_sec": 0, 00:15:26.391 "w_mbytes_per_sec": 0 00:15:26.391 }, 00:15:26.391 "block_size": 4096, 00:15:26.391 "claimed": false, 00:15:26.391 "driver_specific": {}, 00:15:26.391 "memory_domains": [ 00:15:26.391 { 00:15:26.391 "dma_device_id": "system", 00:15:26.391 "dma_device_type": 1 00:15:26.391 }, 00:15:26.391 { 00:15:26.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.391 "dma_device_type": 2 00:15:26.391 } 00:15:26.391 ], 00:15:26.391 "name": "Malloc1", 00:15:26.391 "num_blocks": 256, 00:15:26.391 "product_name": "Malloc disk", 00:15:26.391 "supported_io_types": { 00:15:26.391 "abort": true, 00:15:26.391 "compare": false, 00:15:26.391 "compare_and_write": false, 00:15:26.391 "flush": true, 00:15:26.391 "nvme_admin": false, 00:15:26.391 "nvme_io": false, 00:15:26.391 "read": true, 00:15:26.391 "reset": true, 00:15:26.391 "unmap": true, 00:15:26.391 "write": true, 00:15:26.391 "write_zeroes": true 00:15:26.391 }, 00:15:26.391 "uuid": "10486469-de0f-4d98-acb2-569f6670e319", 00:15:26.391 "zoned": false 00:15:26.391 } 00:15:26.391 ]' 00:15:26.391 13:25:43 -- rpc/rpc.sh@32 -- # jq length 00:15:26.391 13:25:43 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:26.391 13:25:43 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:26.391 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.391 13:25:43 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:26.391 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.391 13:25:43 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:26.391 13:25:43 -- rpc/rpc.sh@36 -- # jq length 00:15:26.391 13:25:43 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:26.391 00:15:26.391 real 0m0.167s 00:15:26.391 user 0m0.106s 00:15:26.391 sys 0m0.019s 00:15:26.391 13:25:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.391 ************************************ 00:15:26.391 END TEST rpc_plugins 00:15:26.391 ************************************ 00:15:26.391 13:25:43 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:26.391 13:25:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:26.391 13:25:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.391 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.650 ************************************ 00:15:26.650 START TEST rpc_trace_cmd_test 00:15:26.650 ************************************ 00:15:26.650 13:25:43 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:15:26.650 13:25:43 -- rpc/rpc.sh@40 -- # local info 00:15:26.650 13:25:43 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:26.650 13:25:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.650 13:25:43 -- common/autotest_common.sh@10 -- # set +x 00:15:26.650 13:25:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.650 13:25:43 -- rpc/rpc.sh@42 -- # 
info='{ 00:15:26.650 "bdev": { 00:15:26.650 "mask": "0x8", 00:15:26.650 "tpoint_mask": "0xffffffffffffffff" 00:15:26.650 }, 00:15:26.650 "bdev_nvme": { 00:15:26.650 "mask": "0x4000", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "blobfs": { 00:15:26.650 "mask": "0x80", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "dsa": { 00:15:26.650 "mask": "0x200", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "ftl": { 00:15:26.650 "mask": "0x40", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "iaa": { 00:15:26.650 "mask": "0x1000", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "iscsi_conn": { 00:15:26.650 "mask": "0x2", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "nvme_pcie": { 00:15:26.650 "mask": "0x800", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "nvme_tcp": { 00:15:26.650 "mask": "0x2000", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "nvmf_rdma": { 00:15:26.650 "mask": "0x10", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "nvmf_tcp": { 00:15:26.650 "mask": "0x20", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "scsi": { 00:15:26.650 "mask": "0x4", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "sock": { 00:15:26.650 "mask": "0x8000", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "thread": { 00:15:26.650 "mask": "0x400", 00:15:26.650 "tpoint_mask": "0x0" 00:15:26.650 }, 00:15:26.650 "tpoint_group_mask": "0x8", 00:15:26.650 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60135" 00:15:26.650 }' 00:15:26.650 13:25:43 -- rpc/rpc.sh@43 -- # jq length 00:15:26.650 13:25:43 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:15:26.650 13:25:43 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:26.650 13:25:44 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:26.650 13:25:44 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:26.650 13:25:44 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:26.650 13:25:44 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:26.909 13:25:44 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:26.909 13:25:44 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:26.909 13:25:44 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:26.909 00:15:26.909 real 0m0.282s 00:15:26.909 user 0m0.240s 00:15:26.909 sys 0m0.028s 00:15:26.909 13:25:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:26.909 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:26.910 ************************************ 00:15:26.910 END TEST rpc_trace_cmd_test 00:15:26.910 ************************************ 00:15:26.910 13:25:44 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:15:26.910 13:25:44 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:15:26.910 13:25:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:26.910 13:25:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.910 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:26.910 ************************************ 00:15:26.910 START TEST go_rpc 00:15:26.910 ************************************ 00:15:26.910 13:25:44 -- common/autotest_common.sh@1111 -- # go_rpc 00:15:26.910 13:25:44 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:26.910 13:25:44 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:15:26.910 13:25:44 -- rpc/rpc.sh@52 -- # jq length 00:15:26.910 13:25:44 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:15:26.910 13:25:44 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:15:26.910 
13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.910 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.168 13:25:44 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:15:27.168 13:25:44 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:27.168 13:25:44 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["d8dd8b75-1293-41bc-b903-17bbf374fb2a"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"d8dd8b75-1293-41bc-b903-17bbf374fb2a","zoned":false}]' 00:15:27.168 13:25:44 -- rpc/rpc.sh@57 -- # jq length 00:15:27.168 13:25:44 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:15:27.168 13:25:44 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:27.168 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.168 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.168 13:25:44 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:27.168 13:25:44 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:15:27.168 13:25:44 -- rpc/rpc.sh@61 -- # jq length 00:15:27.168 13:25:44 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:15:27.168 00:15:27.168 real 0m0.244s 00:15:27.168 user 0m0.157s 00:15:27.168 sys 0m0.042s 00:15:27.168 13:25:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.168 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.168 ************************************ 00:15:27.168 END TEST go_rpc 00:15:27.168 ************************************ 00:15:27.168 13:25:44 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:27.168 13:25:44 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:27.168 13:25:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:27.168 13:25:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:27.168 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 ************************************ 00:15:27.427 START TEST rpc_daemon_integrity 00:15:27.427 ************************************ 00:15:27.427 13:25:44 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:15:27.427 13:25:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:27.427 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.427 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.427 13:25:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:27.427 13:25:44 -- rpc/rpc.sh@13 -- # jq length 00:15:27.427 13:25:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:27.427 13:25:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:27.427 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.427 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.427 13:25:44 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:15:27.427 13:25:44 -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:15:27.427 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.427 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.427 13:25:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:27.427 { 00:15:27.427 "aliases": [ 00:15:27.427 "3fe55fde-c391-45f0-a4d6-4adb77994932" 00:15:27.427 ], 00:15:27.427 "assigned_rate_limits": { 00:15:27.427 "r_mbytes_per_sec": 0, 00:15:27.427 "rw_ios_per_sec": 0, 00:15:27.427 "rw_mbytes_per_sec": 0, 00:15:27.427 "w_mbytes_per_sec": 0 00:15:27.427 }, 00:15:27.427 "block_size": 512, 00:15:27.427 "claimed": false, 00:15:27.427 "driver_specific": {}, 00:15:27.427 "memory_domains": [ 00:15:27.427 { 00:15:27.427 "dma_device_id": "system", 00:15:27.427 "dma_device_type": 1 00:15:27.427 }, 00:15:27.427 { 00:15:27.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.427 "dma_device_type": 2 00:15:27.427 } 00:15:27.427 ], 00:15:27.427 "name": "Malloc3", 00:15:27.427 "num_blocks": 16384, 00:15:27.427 "product_name": "Malloc disk", 00:15:27.427 "supported_io_types": { 00:15:27.427 "abort": true, 00:15:27.427 "compare": false, 00:15:27.427 "compare_and_write": false, 00:15:27.427 "flush": true, 00:15:27.427 "nvme_admin": false, 00:15:27.427 "nvme_io": false, 00:15:27.427 "read": true, 00:15:27.427 "reset": true, 00:15:27.427 "unmap": true, 00:15:27.427 "write": true, 00:15:27.427 "write_zeroes": true 00:15:27.427 }, 00:15:27.427 "uuid": "3fe55fde-c391-45f0-a4d6-4adb77994932", 00:15:27.427 "zoned": false 00:15:27.427 } 00:15:27.427 ]' 00:15:27.427 13:25:44 -- rpc/rpc.sh@17 -- # jq length 00:15:27.427 13:25:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:27.427 13:25:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:15:27.427 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.427 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 [2024-04-26 13:25:44.817004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:27.427 [2024-04-26 13:25:44.817067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.427 [2024-04-26 13:25:44.817089] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e3bdc0 00:15:27.427 [2024-04-26 13:25:44.817099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.427 [2024-04-26 13:25:44.818515] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.427 [2024-04-26 13:25:44.818562] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:27.427 Passthru0 00:15:27.427 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.427 13:25:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:27.427 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.427 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.427 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.427 13:25:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:27.427 { 00:15:27.427 "aliases": [ 00:15:27.427 "3fe55fde-c391-45f0-a4d6-4adb77994932" 00:15:27.427 ], 00:15:27.427 "assigned_rate_limits": { 00:15:27.427 "r_mbytes_per_sec": 0, 00:15:27.427 "rw_ios_per_sec": 0, 00:15:27.427 "rw_mbytes_per_sec": 0, 00:15:27.427 "w_mbytes_per_sec": 0 00:15:27.427 }, 00:15:27.427 "block_size": 512, 00:15:27.427 "claim_type": "exclusive_write", 00:15:27.427 "claimed": true, 00:15:27.427 "driver_specific": {}, 00:15:27.427 
"memory_domains": [ 00:15:27.427 { 00:15:27.427 "dma_device_id": "system", 00:15:27.427 "dma_device_type": 1 00:15:27.427 }, 00:15:27.427 { 00:15:27.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.427 "dma_device_type": 2 00:15:27.427 } 00:15:27.427 ], 00:15:27.427 "name": "Malloc3", 00:15:27.427 "num_blocks": 16384, 00:15:27.427 "product_name": "Malloc disk", 00:15:27.427 "supported_io_types": { 00:15:27.427 "abort": true, 00:15:27.427 "compare": false, 00:15:27.427 "compare_and_write": false, 00:15:27.427 "flush": true, 00:15:27.427 "nvme_admin": false, 00:15:27.427 "nvme_io": false, 00:15:27.427 "read": true, 00:15:27.427 "reset": true, 00:15:27.427 "unmap": true, 00:15:27.427 "write": true, 00:15:27.427 "write_zeroes": true 00:15:27.427 }, 00:15:27.427 "uuid": "3fe55fde-c391-45f0-a4d6-4adb77994932", 00:15:27.427 "zoned": false 00:15:27.427 }, 00:15:27.427 { 00:15:27.427 "aliases": [ 00:15:27.427 "bcaae5f5-20c4-56a8-8308-0a5031466a61" 00:15:27.427 ], 00:15:27.427 "assigned_rate_limits": { 00:15:27.427 "r_mbytes_per_sec": 0, 00:15:27.427 "rw_ios_per_sec": 0, 00:15:27.427 "rw_mbytes_per_sec": 0, 00:15:27.427 "w_mbytes_per_sec": 0 00:15:27.427 }, 00:15:27.427 "block_size": 512, 00:15:27.427 "claimed": false, 00:15:27.427 "driver_specific": { 00:15:27.427 "passthru": { 00:15:27.427 "base_bdev_name": "Malloc3", 00:15:27.427 "name": "Passthru0" 00:15:27.427 } 00:15:27.427 }, 00:15:27.427 "memory_domains": [ 00:15:27.427 { 00:15:27.427 "dma_device_id": "system", 00:15:27.427 "dma_device_type": 1 00:15:27.427 }, 00:15:27.427 { 00:15:27.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.427 "dma_device_type": 2 00:15:27.427 } 00:15:27.427 ], 00:15:27.427 "name": "Passthru0", 00:15:27.427 "num_blocks": 16384, 00:15:27.427 "product_name": "passthru", 00:15:27.427 "supported_io_types": { 00:15:27.427 "abort": true, 00:15:27.427 "compare": false, 00:15:27.427 "compare_and_write": false, 00:15:27.427 "flush": true, 00:15:27.427 "nvme_admin": false, 00:15:27.427 "nvme_io": false, 00:15:27.427 "read": true, 00:15:27.427 "reset": true, 00:15:27.427 "unmap": true, 00:15:27.427 "write": true, 00:15:27.427 "write_zeroes": true 00:15:27.427 }, 00:15:27.428 "uuid": "bcaae5f5-20c4-56a8-8308-0a5031466a61", 00:15:27.428 "zoned": false 00:15:27.428 } 00:15:27.428 ]' 00:15:27.428 13:25:44 -- rpc/rpc.sh@21 -- # jq length 00:15:27.687 13:25:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:27.687 13:25:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:27.687 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.687 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.687 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.687 13:25:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:27.687 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.687 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.687 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.687 13:25:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:27.687 13:25:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.687 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.687 13:25:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.687 13:25:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:27.687 13:25:44 -- rpc/rpc.sh@26 -- # jq length 00:15:27.687 13:25:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:27.687 00:15:27.687 real 0m0.337s 00:15:27.687 user 0m0.222s 00:15:27.687 sys 0m0.033s 00:15:27.687 
13:25:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:27.687 13:25:44 -- common/autotest_common.sh@10 -- # set +x 00:15:27.687 ************************************ 00:15:27.687 END TEST rpc_daemon_integrity 00:15:27.687 ************************************ 00:15:27.687 13:25:45 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:27.687 13:25:45 -- rpc/rpc.sh@84 -- # killprocess 60135 00:15:27.687 13:25:45 -- common/autotest_common.sh@936 -- # '[' -z 60135 ']' 00:15:27.687 13:25:45 -- common/autotest_common.sh@940 -- # kill -0 60135 00:15:27.687 13:25:45 -- common/autotest_common.sh@941 -- # uname 00:15:27.687 13:25:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:27.687 13:25:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60135 00:15:27.687 13:25:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:27.687 13:25:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:27.687 killing process with pid 60135 00:15:27.687 13:25:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60135' 00:15:27.687 13:25:45 -- common/autotest_common.sh@955 -- # kill 60135 00:15:27.687 13:25:45 -- common/autotest_common.sh@960 -- # wait 60135 00:15:28.293 00:15:28.293 real 0m3.697s 00:15:28.293 user 0m4.869s 00:15:28.293 sys 0m0.991s 00:15:28.293 13:25:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.293 13:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.293 ************************************ 00:15:28.293 END TEST rpc 00:15:28.293 ************************************ 00:15:28.293 13:25:45 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:28.293 13:25:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:28.293 13:25:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.293 13:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.293 ************************************ 00:15:28.293 START TEST skip_rpc 00:15:28.293 ************************************ 00:15:28.293 13:25:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:28.293 * Looking for test storage... 00:15:28.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:28.293 13:25:45 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:28.293 13:25:45 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:28.293 13:25:45 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:28.293 13:25:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:28.293 13:25:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.293 13:25:45 -- common/autotest_common.sh@10 -- # set +x 00:15:28.552 ************************************ 00:15:28.552 START TEST skip_rpc 00:15:28.552 ************************************ 00:15:28.552 13:25:45 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:15:28.552 13:25:45 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60439 00:15:28.552 13:25:45 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:28.552 13:25:45 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:28.552 13:25:45 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:28.552 [2024-04-26 13:25:45.863218] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
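The skip_rpc case just launched the target with --no-rpc-server, so the check that follows simply asserts that an RPC client cannot connect. A hand-run sketch of the same idea (paths as used by the test; the test itself routes the call through its Go RPC client, but plain rpc.py fails the same way):
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target runs, but no /var/tmp/spdk.sock is created
  scripts/rpc.py spdk_get_version                # expected to fail: no Unix socket to connect to
  echo $?                                        # non-zero
  kill %1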
00:15:28.552 [2024-04-26 13:25:45.863350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60439 ] 00:15:28.812 [2024-04-26 13:25:46.002380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.812 [2024-04-26 13:25:46.111189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.082 13:25:50 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:34.082 13:25:50 -- common/autotest_common.sh@638 -- # local es=0 00:15:34.082 13:25:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:34.082 13:25:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:34.082 13:25:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:34.082 13:25:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:34.082 13:25:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:34.082 13:25:50 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:15:34.082 13:25:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:34.082 13:25:50 -- common/autotest_common.sh@10 -- # set +x 00:15:34.082 2024/04/26 13:25:50 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:15:34.082 13:25:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:34.082 13:25:50 -- common/autotest_common.sh@641 -- # es=1 00:15:34.082 13:25:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:34.082 13:25:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:34.082 13:25:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:34.082 13:25:50 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:34.082 13:25:50 -- rpc/skip_rpc.sh@23 -- # killprocess 60439 00:15:34.082 13:25:50 -- common/autotest_common.sh@936 -- # '[' -z 60439 ']' 00:15:34.082 13:25:50 -- common/autotest_common.sh@940 -- # kill -0 60439 00:15:34.082 13:25:50 -- common/autotest_common.sh@941 -- # uname 00:15:34.082 13:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.082 13:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60439 00:15:34.082 13:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.082 13:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:34.082 killing process with pid 60439 00:15:34.082 13:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60439' 00:15:34.082 13:25:50 -- common/autotest_common.sh@955 -- # kill 60439 00:15:34.082 13:25:50 -- common/autotest_common.sh@960 -- # wait 60439 00:15:34.082 00:15:34.082 real 0m5.472s 00:15:34.082 user 0m5.090s 00:15:34.082 sys 0m0.283s 00:15:34.082 13:25:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:34.082 13:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:34.082 ************************************ 00:15:34.082 END TEST skip_rpc 00:15:34.082 ************************************ 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:34.082 13:25:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:34.082 13:25:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:34.082 13:25:51 -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.082 ************************************ 00:15:34.082 START TEST skip_rpc_with_json 00:15:34.082 ************************************ 00:15:34.082 13:25:51 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60530 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.082 13:25:51 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60530 00:15:34.082 13:25:51 -- common/autotest_common.sh@817 -- # '[' -z 60530 ']' 00:15:34.082 13:25:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.082 13:25:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:34.082 13:25:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.082 13:25:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:34.082 13:25:51 -- common/autotest_common.sh@10 -- # set +x 00:15:34.082 [2024-04-26 13:25:51.457049] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:34.083 [2024-04-26 13:25:51.457164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60530 ] 00:15:34.341 [2024-04-26 13:25:51.598823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.341 [2024-04-26 13:25:51.726725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.279 13:25:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:35.279 13:25:52 -- common/autotest_common.sh@850 -- # return 0 00:15:35.279 13:25:52 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:35.279 13:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:35.279 13:25:52 -- common/autotest_common.sh@10 -- # set +x 00:15:35.279 [2024-04-26 13:25:52.448761] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:35.279 2024/04/26 13:25:52 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:15:35.279 request: 00:15:35.279 { 00:15:35.279 "method": "nvmf_get_transports", 00:15:35.279 "params": { 00:15:35.279 "trtype": "tcp" 00:15:35.279 } 00:15:35.279 } 00:15:35.279 Got JSON-RPC error response 00:15:35.279 GoRPCClient: error on JSON-RPC call 00:15:35.279 13:25:52 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:35.279 13:25:52 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:35.279 13:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:35.279 13:25:52 -- common/autotest_common.sh@10 -- # set +x 00:15:35.279 [2024-04-26 13:25:52.460878] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.279 13:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:35.279 13:25:52 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:35.279 13:25:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:35.279 13:25:52 -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.279 13:25:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:35.279 13:25:52 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:35.279 { 00:15:35.279 "subsystems": [ 00:15:35.279 { 00:15:35.279 "subsystem": "keyring", 00:15:35.279 "config": [] 00:15:35.279 }, 00:15:35.279 { 00:15:35.279 "subsystem": "iobuf", 00:15:35.279 "config": [ 00:15:35.279 { 00:15:35.279 "method": "iobuf_set_options", 00:15:35.279 "params": { 00:15:35.279 "large_bufsize": 135168, 00:15:35.279 "large_pool_count": 1024, 00:15:35.279 "small_bufsize": 8192, 00:15:35.279 "small_pool_count": 8192 00:15:35.279 } 00:15:35.279 } 00:15:35.279 ] 00:15:35.279 }, 00:15:35.279 { 00:15:35.279 "subsystem": "sock", 00:15:35.279 "config": [ 00:15:35.279 { 00:15:35.279 "method": "sock_impl_set_options", 00:15:35.279 "params": { 00:15:35.279 "enable_ktls": false, 00:15:35.279 "enable_placement_id": 0, 00:15:35.279 "enable_quickack": false, 00:15:35.279 "enable_recv_pipe": true, 00:15:35.279 "enable_zerocopy_send_client": false, 00:15:35.279 "enable_zerocopy_send_server": true, 00:15:35.279 "impl_name": "posix", 00:15:35.279 "recv_buf_size": 2097152, 00:15:35.279 "send_buf_size": 2097152, 00:15:35.279 "tls_version": 0, 00:15:35.279 "zerocopy_threshold": 0 00:15:35.279 } 00:15:35.279 }, 00:15:35.279 { 00:15:35.279 "method": "sock_impl_set_options", 00:15:35.279 "params": { 00:15:35.279 "enable_ktls": false, 00:15:35.279 "enable_placement_id": 0, 00:15:35.279 "enable_quickack": false, 00:15:35.279 "enable_recv_pipe": true, 00:15:35.279 "enable_zerocopy_send_client": false, 00:15:35.279 "enable_zerocopy_send_server": true, 00:15:35.279 "impl_name": "ssl", 00:15:35.279 "recv_buf_size": 4096, 00:15:35.279 "send_buf_size": 4096, 00:15:35.279 "tls_version": 0, 00:15:35.279 "zerocopy_threshold": 0 00:15:35.279 } 00:15:35.279 } 00:15:35.279 ] 00:15:35.279 }, 00:15:35.279 { 00:15:35.279 "subsystem": "vmd", 00:15:35.279 "config": [] 00:15:35.279 }, 00:15:35.279 { 00:15:35.279 "subsystem": "accel", 00:15:35.279 "config": [ 00:15:35.279 { 00:15:35.279 "method": "accel_set_options", 00:15:35.279 "params": { 00:15:35.280 "buf_count": 2048, 00:15:35.280 "large_cache_size": 16, 00:15:35.280 "sequence_count": 2048, 00:15:35.280 "small_cache_size": 128, 00:15:35.280 "task_count": 2048 00:15:35.280 } 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "bdev", 00:15:35.280 "config": [ 00:15:35.280 { 00:15:35.280 "method": "bdev_set_options", 00:15:35.280 "params": { 00:15:35.280 "bdev_auto_examine": true, 00:15:35.280 "bdev_io_cache_size": 256, 00:15:35.280 "bdev_io_pool_size": 65535, 00:15:35.280 "iobuf_large_cache_size": 16, 00:15:35.280 "iobuf_small_cache_size": 128 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "bdev_raid_set_options", 00:15:35.280 "params": { 00:15:35.280 "process_window_size_kb": 1024 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "bdev_iscsi_set_options", 00:15:35.280 "params": { 00:15:35.280 "timeout_sec": 30 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "bdev_nvme_set_options", 00:15:35.280 "params": { 00:15:35.280 "action_on_timeout": "none", 00:15:35.280 "allow_accel_sequence": false, 00:15:35.280 "arbitration_burst": 0, 00:15:35.280 "bdev_retry_count": 3, 00:15:35.280 "ctrlr_loss_timeout_sec": 0, 00:15:35.280 "delay_cmd_submit": true, 00:15:35.280 "dhchap_dhgroups": [ 00:15:35.280 "null", 00:15:35.280 "ffdhe2048", 00:15:35.280 
"ffdhe3072", 00:15:35.280 "ffdhe4096", 00:15:35.280 "ffdhe6144", 00:15:35.280 "ffdhe8192" 00:15:35.280 ], 00:15:35.280 "dhchap_digests": [ 00:15:35.280 "sha256", 00:15:35.280 "sha384", 00:15:35.280 "sha512" 00:15:35.280 ], 00:15:35.280 "disable_auto_failback": false, 00:15:35.280 "fast_io_fail_timeout_sec": 0, 00:15:35.280 "generate_uuids": false, 00:15:35.280 "high_priority_weight": 0, 00:15:35.280 "io_path_stat": false, 00:15:35.280 "io_queue_requests": 0, 00:15:35.280 "keep_alive_timeout_ms": 10000, 00:15:35.280 "low_priority_weight": 0, 00:15:35.280 "medium_priority_weight": 0, 00:15:35.280 "nvme_adminq_poll_period_us": 10000, 00:15:35.280 "nvme_error_stat": false, 00:15:35.280 "nvme_ioq_poll_period_us": 0, 00:15:35.280 "rdma_cm_event_timeout_ms": 0, 00:15:35.280 "rdma_max_cq_size": 0, 00:15:35.280 "rdma_srq_size": 0, 00:15:35.280 "reconnect_delay_sec": 0, 00:15:35.280 "timeout_admin_us": 0, 00:15:35.280 "timeout_us": 0, 00:15:35.280 "transport_ack_timeout": 0, 00:15:35.280 "transport_retry_count": 4, 00:15:35.280 "transport_tos": 0 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "bdev_nvme_set_hotplug", 00:15:35.280 "params": { 00:15:35.280 "enable": false, 00:15:35.280 "period_us": 100000 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "bdev_wait_for_examine" 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "scsi", 00:15:35.280 "config": null 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "scheduler", 00:15:35.280 "config": [ 00:15:35.280 { 00:15:35.280 "method": "framework_set_scheduler", 00:15:35.280 "params": { 00:15:35.280 "name": "static" 00:15:35.280 } 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "vhost_scsi", 00:15:35.280 "config": [] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "vhost_blk", 00:15:35.280 "config": [] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "ublk", 00:15:35.280 "config": [] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "nbd", 00:15:35.280 "config": [] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "nvmf", 00:15:35.280 "config": [ 00:15:35.280 { 00:15:35.280 "method": "nvmf_set_config", 00:15:35.280 "params": { 00:15:35.280 "admin_cmd_passthru": { 00:15:35.280 "identify_ctrlr": false 00:15:35.280 }, 00:15:35.280 "discovery_filter": "match_any" 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "nvmf_set_max_subsystems", 00:15:35.280 "params": { 00:15:35.280 "max_subsystems": 1024 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "nvmf_set_crdt", 00:15:35.280 "params": { 00:15:35.280 "crdt1": 0, 00:15:35.280 "crdt2": 0, 00:15:35.280 "crdt3": 0 00:15:35.280 } 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "method": "nvmf_create_transport", 00:15:35.280 "params": { 00:15:35.280 "abort_timeout_sec": 1, 00:15:35.280 "ack_timeout": 0, 00:15:35.280 "buf_cache_size": 4294967295, 00:15:35.280 "c2h_success": true, 00:15:35.280 "data_wr_pool_size": 0, 00:15:35.280 "dif_insert_or_strip": false, 00:15:35.280 "in_capsule_data_size": 4096, 00:15:35.280 "io_unit_size": 131072, 00:15:35.280 "max_aq_depth": 128, 00:15:35.280 "max_io_qpairs_per_ctrlr": 127, 00:15:35.280 "max_io_size": 131072, 00:15:35.280 "max_queue_depth": 128, 00:15:35.280 "num_shared_buffers": 511, 00:15:35.280 "sock_priority": 0, 00:15:35.280 "trtype": "TCP", 00:15:35.280 "zcopy": false 00:15:35.280 } 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 }, 00:15:35.280 { 00:15:35.280 "subsystem": "iscsi", 
00:15:35.280 "config": [ 00:15:35.280 { 00:15:35.280 "method": "iscsi_set_options", 00:15:35.280 "params": { 00:15:35.280 "allow_duplicated_isid": false, 00:15:35.280 "chap_group": 0, 00:15:35.280 "data_out_pool_size": 2048, 00:15:35.280 "default_time2retain": 20, 00:15:35.280 "default_time2wait": 2, 00:15:35.280 "disable_chap": false, 00:15:35.280 "error_recovery_level": 0, 00:15:35.280 "first_burst_length": 8192, 00:15:35.280 "immediate_data": true, 00:15:35.280 "immediate_data_pool_size": 16384, 00:15:35.280 "max_connections_per_session": 2, 00:15:35.280 "max_large_datain_per_connection": 64, 00:15:35.280 "max_queue_depth": 64, 00:15:35.280 "max_r2t_per_connection": 4, 00:15:35.280 "max_sessions": 128, 00:15:35.280 "mutual_chap": false, 00:15:35.280 "node_base": "iqn.2016-06.io.spdk", 00:15:35.280 "nop_in_interval": 30, 00:15:35.280 "nop_timeout": 60, 00:15:35.280 "pdu_pool_size": 36864, 00:15:35.280 "require_chap": false 00:15:35.280 } 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 } 00:15:35.280 ] 00:15:35.280 } 00:15:35.280 13:25:52 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:35.280 13:25:52 -- rpc/skip_rpc.sh@40 -- # killprocess 60530 00:15:35.280 13:25:52 -- common/autotest_common.sh@936 -- # '[' -z 60530 ']' 00:15:35.280 13:25:52 -- common/autotest_common.sh@940 -- # kill -0 60530 00:15:35.280 13:25:52 -- common/autotest_common.sh@941 -- # uname 00:15:35.280 13:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:35.280 13:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60530 00:15:35.280 13:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:35.280 killing process with pid 60530 00:15:35.280 13:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:35.280 13:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60530' 00:15:35.280 13:25:52 -- common/autotest_common.sh@955 -- # kill 60530 00:15:35.280 13:25:52 -- common/autotest_common.sh@960 -- # wait 60530 00:15:35.849 13:25:53 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60575 00:15:35.849 13:25:53 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:35.849 13:25:53 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:41.123 13:25:58 -- rpc/skip_rpc.sh@50 -- # killprocess 60575 00:15:41.123 13:25:58 -- common/autotest_common.sh@936 -- # '[' -z 60575 ']' 00:15:41.123 13:25:58 -- common/autotest_common.sh@940 -- # kill -0 60575 00:15:41.123 13:25:58 -- common/autotest_common.sh@941 -- # uname 00:15:41.123 13:25:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:41.123 13:25:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60575 00:15:41.123 13:25:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:41.123 killing process with pid 60575 00:15:41.123 13:25:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:41.123 13:25:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60575' 00:15:41.123 13:25:58 -- common/autotest_common.sh@955 -- # kill 60575 00:15:41.123 13:25:58 -- common/autotest_common.sh@960 -- # wait 60575 00:15:41.123 13:25:58 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:41.123 13:25:58 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:41.123 00:15:41.123 real 0m7.173s 00:15:41.123 user 0m6.894s 00:15:41.123 sys 0m0.682s 00:15:41.123 
13:25:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.123 ************************************ 00:15:41.123 END TEST skip_rpc_with_json 00:15:41.123 ************************************ 00:15:41.123 13:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:41.382 13:25:58 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:41.382 13:25:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:41.382 13:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.382 13:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:41.382 ************************************ 00:15:41.382 START TEST skip_rpc_with_delay 00:15:41.382 ************************************ 00:15:41.382 13:25:58 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:15:41.382 13:25:58 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:41.382 13:25:58 -- common/autotest_common.sh@638 -- # local es=0 00:15:41.382 13:25:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:41.382 13:25:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.382 13:25:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:41.382 13:25:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.382 13:25:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:41.382 13:25:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.382 13:25:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:41.382 13:25:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:41.382 13:25:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:41.382 13:25:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:41.382 [2024-04-26 13:25:58.749358] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
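The skip_rpc_with_delay case that starts above needs no RPC traffic at all: it only verifies that the two flags contradict each other. Reproduced by hand (the error and a non-zero exit are the expected outcome):
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  echo $?    # non-zero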
00:15:41.382 [2024-04-26 13:25:58.749502] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:15:41.382 13:25:58 -- common/autotest_common.sh@641 -- # es=1 00:15:41.382 13:25:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:41.382 13:25:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:41.382 13:25:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:41.382 00:15:41.382 real 0m0.093s 00:15:41.382 user 0m0.055s 00:15:41.382 sys 0m0.037s 00:15:41.382 13:25:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.382 13:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:41.382 ************************************ 00:15:41.382 END TEST skip_rpc_with_delay 00:15:41.382 ************************************ 00:15:41.382 13:25:58 -- rpc/skip_rpc.sh@77 -- # uname 00:15:41.382 13:25:58 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:41.382 13:25:58 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:41.382 13:25:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:41.382 13:25:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.382 13:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:41.641 ************************************ 00:15:41.641 START TEST exit_on_failed_rpc_init 00:15:41.641 ************************************ 00:15:41.641 13:25:58 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:15:41.641 13:25:58 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60687 00:15:41.641 13:25:58 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:41.641 13:25:58 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60687 00:15:41.641 13:25:58 -- common/autotest_common.sh@817 -- # '[' -z 60687 ']' 00:15:41.641 13:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.641 13:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:41.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.641 13:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.641 13:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:41.641 13:25:58 -- common/autotest_common.sh@10 -- # set +x 00:15:41.641 [2024-04-26 13:25:58.942246] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:41.641 [2024-04-26 13:25:58.942355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60687 ] 00:15:41.641 [2024-04-26 13:25:59.079925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.932 [2024-04-26 13:25:59.224992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.503 13:25:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:42.503 13:25:59 -- common/autotest_common.sh@850 -- # return 0 00:15:42.503 13:25:59 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:42.503 13:25:59 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:42.503 13:25:59 -- common/autotest_common.sh@638 -- # local es=0 00:15:42.503 13:25:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:42.503 13:25:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.503 13:25:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:42.503 13:25:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.503 13:25:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:42.503 13:25:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.503 13:25:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:42.503 13:25:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:42.503 13:25:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:42.503 13:25:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:42.761 [2024-04-26 13:26:00.018260] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:42.761 [2024-04-26 13:26:00.018379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60717 ] 00:15:42.761 [2024-04-26 13:26:00.157505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.019 [2024-04-26 13:26:00.285505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.019 [2024-04-26 13:26:00.285640] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:15:43.019 [2024-04-26 13:26:00.285659] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:43.019 [2024-04-26 13:26:00.285671] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:43.019 13:26:00 -- common/autotest_common.sh@641 -- # es=234 00:15:43.019 13:26:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:43.019 13:26:00 -- common/autotest_common.sh@650 -- # es=106 00:15:43.019 13:26:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:15:43.019 13:26:00 -- common/autotest_common.sh@658 -- # es=1 00:15:43.019 13:26:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:43.019 13:26:00 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:43.019 13:26:00 -- rpc/skip_rpc.sh@70 -- # killprocess 60687 00:15:43.019 13:26:00 -- common/autotest_common.sh@936 -- # '[' -z 60687 ']' 00:15:43.019 13:26:00 -- common/autotest_common.sh@940 -- # kill -0 60687 00:15:43.019 13:26:00 -- common/autotest_common.sh@941 -- # uname 00:15:43.019 13:26:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:43.019 13:26:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60687 00:15:43.019 13:26:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:43.019 killing process with pid 60687 00:15:43.019 13:26:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:43.019 13:26:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60687' 00:15:43.019 13:26:00 -- common/autotest_common.sh@955 -- # kill 60687 00:15:43.019 13:26:00 -- common/autotest_common.sh@960 -- # wait 60687 00:15:43.586 00:15:43.586 real 0m1.997s 00:15:43.586 user 0m2.376s 00:15:43.586 sys 0m0.459s 00:15:43.586 13:26:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.586 13:26:00 -- common/autotest_common.sh@10 -- # set +x 00:15:43.586 ************************************ 00:15:43.586 END TEST exit_on_failed_rpc_init 00:15:43.586 ************************************ 00:15:43.586 13:26:00 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:43.586 00:15:43.586 real 0m15.285s 00:15:43.586 user 0m14.611s 00:15:43.586 sys 0m1.754s 00:15:43.586 13:26:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.586 ************************************ 00:15:43.586 END TEST skip_rpc 00:15:43.586 ************************************ 00:15:43.586 13:26:00 -- common/autotest_common.sh@10 -- # set +x 00:15:43.586 13:26:00 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:43.586 13:26:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:43.586 13:26:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.586 13:26:00 -- common/autotest_common.sh@10 -- # set +x 00:15:43.586 ************************************ 00:15:43.586 START TEST rpc_client 00:15:43.586 ************************************ 00:15:43.586 13:26:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:43.845 * Looking for test storage... 
00:15:43.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:43.845 13:26:01 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:43.845 OK 00:15:43.845 13:26:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:43.845 00:15:43.845 real 0m0.101s 00:15:43.845 user 0m0.042s 00:15:43.845 sys 0m0.064s 00:15:43.845 13:26:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:43.845 13:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:43.845 ************************************ 00:15:43.845 END TEST rpc_client 00:15:43.845 ************************************ 00:15:43.845 13:26:01 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:43.845 13:26:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:43.845 13:26:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:43.845 13:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:43.845 ************************************ 00:15:43.845 START TEST json_config 00:15:43.845 ************************************ 00:15:43.845 13:26:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:44.105 13:26:01 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.105 13:26:01 -- nvmf/common.sh@7 -- # uname -s 00:15:44.105 13:26:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.105 13:26:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.105 13:26:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.105 13:26:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.105 13:26:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.105 13:26:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.105 13:26:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.105 13:26:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.105 13:26:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.105 13:26:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.105 13:26:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:15:44.105 13:26:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:15:44.105 13:26:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.105 13:26:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.105 13:26:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:44.105 13:26:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.105 13:26:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.105 13:26:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.105 13:26:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.105 13:26:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.105 13:26:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.105 13:26:01 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.105 13:26:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.105 13:26:01 -- paths/export.sh@5 -- # export PATH 00:15:44.105 13:26:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.105 13:26:01 -- nvmf/common.sh@47 -- # : 0 00:15:44.105 13:26:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.105 13:26:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.105 13:26:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.105 13:26:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.105 13:26:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.105 13:26:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.105 13:26:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:44.105 13:26:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.105 13:26:01 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:44.105 13:26:01 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:44.105 13:26:01 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:44.105 13:26:01 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:44.105 13:26:01 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:44.105 13:26:01 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:44.105 13:26:01 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:44.105 13:26:01 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:44.105 13:26:01 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:44.105 13:26:01 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:44.105 13:26:01 -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:44.105 13:26:01 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:44.105 13:26:01 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:44.105 13:26:01 -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:44.105 
13:26:01 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:44.105 INFO: JSON configuration test init 00:15:44.105 13:26:01 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:15:44.105 13:26:01 -- json_config/json_config.sh@357 -- # json_config_test_init 00:15:44.105 13:26:01 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:15:44.105 13:26:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:44.105 13:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:44.105 13:26:01 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:15:44.105 13:26:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:44.105 13:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:44.105 13:26:01 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:15:44.105 13:26:01 -- json_config/common.sh@9 -- # local app=target 00:15:44.105 13:26:01 -- json_config/common.sh@10 -- # shift 00:15:44.105 13:26:01 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:44.105 13:26:01 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:44.105 13:26:01 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:44.105 13:26:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:44.105 13:26:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:44.105 13:26:01 -- json_config/common.sh@22 -- # app_pid["$app"]=60856 00:15:44.105 Waiting for target to run... 00:15:44.105 13:26:01 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:44.105 13:26:01 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:44.105 13:26:01 -- json_config/common.sh@25 -- # waitforlisten 60856 /var/tmp/spdk_tgt.sock 00:15:44.105 13:26:01 -- common/autotest_common.sh@817 -- # '[' -z 60856 ']' 00:15:44.105 13:26:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:44.105 13:26:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:44.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:44.105 13:26:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:44.105 13:26:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:44.105 13:26:01 -- common/autotest_common.sh@10 -- # set +x 00:15:44.105 [2024-04-26 13:26:01.406199] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:44.105 [2024-04-26 13:26:01.406992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60856 ] 00:15:44.672 [2024-04-26 13:26:01.831640] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.672 [2024-04-26 13:26:01.923252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.931 13:26:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:44.931 13:26:02 -- common/autotest_common.sh@850 -- # return 0 00:15:44.931 00:15:44.931 13:26:02 -- json_config/common.sh@26 -- # echo '' 00:15:44.931 13:26:02 -- json_config/json_config.sh@269 -- # create_accel_config 00:15:44.931 13:26:02 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:15:44.931 13:26:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:44.931 13:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:44.931 13:26:02 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:15:44.931 13:26:02 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:15:44.931 13:26:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:44.931 13:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:45.190 13:26:02 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:45.190 13:26:02 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:15:45.190 13:26:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:45.450 13:26:02 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:15:45.450 13:26:02 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:45.450 13:26:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.450 13:26:02 -- common/autotest_common.sh@10 -- # set +x 00:15:45.450 13:26:02 -- json_config/json_config.sh@45 -- # local ret=0 00:15:45.450 13:26:02 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:45.450 13:26:02 -- json_config/json_config.sh@46 -- # local enabled_types 00:15:45.450 13:26:02 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:15:45.450 13:26:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:45.450 13:26:02 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:15:45.708 13:26:03 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:15:45.708 13:26:03 -- json_config/json_config.sh@48 -- # local get_types 00:15:45.708 13:26:03 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:15:45.708 13:26:03 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:15:45.708 13:26:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:45.708 13:26:03 -- common/autotest_common.sh@10 -- # set +x 00:15:45.967 13:26:03 -- json_config/json_config.sh@55 -- # return 0 00:15:45.967 13:26:03 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:15:45.967 13:26:03 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:15:45.967 13:26:03 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:15:45.967 13:26:03 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
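The notification-type check above maps onto two plain RPCs against the json_config target; roughly, assuming the test's /var/tmp/spdk_tgt.sock socket (feeding gen_nvme.sh output into load_config is shown here as a pipeline):
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]'
  # expected output: bdev_register and bdev_unregister, the notification types the test checks for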
00:15:45.967 13:26:03 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:15:45.967 13:26:03 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:15:45.967 13:26:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:45.967 13:26:03 -- common/autotest_common.sh@10 -- # set +x 00:15:45.967 13:26:03 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:15:45.967 13:26:03 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:15:45.967 13:26:03 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:15:45.967 13:26:03 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:45.967 13:26:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:45.967 MallocForNvmf0 00:15:46.225 13:26:03 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:46.225 13:26:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:46.484 MallocForNvmf1 00:15:46.484 13:26:03 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:15:46.484 13:26:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:15:46.484 [2024-04-26 13:26:03.925129] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.742 13:26:03 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.742 13:26:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:47.000 13:26:04 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:47.000 13:26:04 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:47.258 13:26:04 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:47.258 13:26:04 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:47.517 13:26:04 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:47.517 13:26:04 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:47.801 [2024-04-26 13:26:05.005723] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:47.801 13:26:05 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:15:47.801 13:26:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:47.801 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:15:47.801 13:26:05 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:15:47.801 13:26:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:47.801 13:26:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.801 13:26:05 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:15:47.801 13:26:05 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:47.801 13:26:05 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:48.060 MallocBdevForConfigChangeCheck 00:15:48.060 13:26:05 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:15:48.060 13:26:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:48.060 13:26:05 -- common/autotest_common.sh@10 -- # set +x 00:15:48.060 13:26:05 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:15:48.060 13:26:05 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:48.626 INFO: shutting down applications... 00:15:48.626 13:26:05 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:15:48.626 13:26:05 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:15:48.626 13:26:05 -- json_config/json_config.sh@368 -- # json_config_clear target 00:15:48.626 13:26:05 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:15:48.626 13:26:05 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:48.885 Calling clear_iscsi_subsystem 00:15:48.885 Calling clear_nvmf_subsystem 00:15:48.885 Calling clear_nbd_subsystem 00:15:48.885 Calling clear_ublk_subsystem 00:15:48.885 Calling clear_vhost_blk_subsystem 00:15:48.885 Calling clear_vhost_scsi_subsystem 00:15:48.885 Calling clear_bdev_subsystem 00:15:48.885 13:26:06 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:48.885 13:26:06 -- json_config/json_config.sh@343 -- # count=100 00:15:48.885 13:26:06 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:15:48.885 13:26:06 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:48.885 13:26:06 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:48.885 13:26:06 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:49.454 13:26:06 -- json_config/json_config.sh@345 -- # break 00:15:49.454 13:26:06 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:15:49.454 13:26:06 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:15:49.454 13:26:06 -- json_config/common.sh@31 -- # local app=target 00:15:49.454 13:26:06 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:49.454 13:26:06 -- json_config/common.sh@35 -- # [[ -n 60856 ]] 00:15:49.454 13:26:06 -- json_config/common.sh@38 -- # kill -SIGINT 60856 00:15:49.454 13:26:06 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:49.454 13:26:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:49.454 13:26:06 -- json_config/common.sh@41 -- # kill -0 60856 00:15:49.454 13:26:06 -- json_config/common.sh@45 -- # sleep 0.5 00:15:49.713 13:26:07 -- json_config/common.sh@40 -- # (( i++ )) 00:15:49.713 13:26:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:49.713 13:26:07 -- json_config/common.sh@41 -- # kill -0 60856 00:15:49.713 13:26:07 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:15:49.713 13:26:07 -- json_config/common.sh@43 -- # break 00:15:49.713 13:26:07 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:49.713 SPDK target shutdown done 00:15:49.713 13:26:07 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:49.713 INFO: relaunching applications... 00:15:49.713 13:26:07 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:15:49.713 13:26:07 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:49.713 13:26:07 -- json_config/common.sh@9 -- # local app=target 00:15:49.713 13:26:07 -- json_config/common.sh@10 -- # shift 00:15:49.713 13:26:07 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:49.713 13:26:07 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:49.713 13:26:07 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:49.713 13:26:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:49.713 13:26:07 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:49.713 13:26:07 -- json_config/common.sh@22 -- # app_pid["$app"]=61132 00:15:49.713 13:26:07 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:49.713 Waiting for target to run... 00:15:49.713 13:26:07 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:49.713 13:26:07 -- json_config/common.sh@25 -- # waitforlisten 61132 /var/tmp/spdk_tgt.sock 00:15:49.713 13:26:07 -- common/autotest_common.sh@817 -- # '[' -z 61132 ']' 00:15:49.713 13:26:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:49.713 13:26:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:49.713 13:26:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:49.713 13:26:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.713 13:26:07 -- common/autotest_common.sh@10 -- # set +x 00:15:49.972 [2024-04-26 13:26:07.174801] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:49.972 [2024-04-26 13:26:07.174913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61132 ] 00:15:50.231 [2024-04-26 13:26:07.587762] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.500 [2024-04-26 13:26:07.681792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.776 [2024-04-26 13:26:07.992020] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.776 [2024-04-26 13:26:08.024104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:50.776 13:26:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.776 00:15:50.776 13:26:08 -- common/autotest_common.sh@850 -- # return 0 00:15:50.776 13:26:08 -- json_config/common.sh@26 -- # echo '' 00:15:50.776 13:26:08 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:15:50.776 INFO: Checking if target configuration is the same... 
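Note: steps json_config.sh@242 through @249 above are the entire NVMe-oF/TCP configuration that later gets saved to spdk_tgt_config.json and replayed on relaunch. Collected in one place, and assuming a freshly started spdk_tgt listening on /var/tmp/spdk_tgt.sock, the same target can be rebuilt by hand with exactly the rpc.py calls the harness issued:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # two malloc bdevs used as namespaces (total size in MB, then block size in bytes)
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, one subsystem with both namespaces, listener on 127.0.0.1:4420
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420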
00:15:50.776 13:26:08 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:50.776 13:26:08 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:50.776 13:26:08 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:15:50.776 13:26:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:50.776 + '[' 2 -ne 2 ']' 00:15:50.776 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:50.776 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:50.776 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:50.776 +++ basename /dev/fd/62 00:15:50.776 ++ mktemp /tmp/62.XXX 00:15:50.776 + tmp_file_1=/tmp/62.VBM 00:15:50.776 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:51.035 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:51.035 + tmp_file_2=/tmp/spdk_tgt_config.json.55h 00:15:51.035 + ret=0 00:15:51.035 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:51.294 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:51.294 + diff -u /tmp/62.VBM /tmp/spdk_tgt_config.json.55h 00:15:51.294 INFO: JSON config files are the same 00:15:51.294 + echo 'INFO: JSON config files are the same' 00:15:51.294 + rm /tmp/62.VBM /tmp/spdk_tgt_config.json.55h 00:15:51.294 + exit 0 00:15:51.294 13:26:08 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:15:51.294 INFO: changing configuration and checking if this can be detected... 00:15:51.294 13:26:08 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:15:51.294 13:26:08 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:51.294 13:26:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:51.554 13:26:08 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:15:51.554 13:26:08 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:51.554 13:26:08 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:51.554 + '[' 2 -ne 2 ']' 00:15:51.554 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:51.554 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:15:51.554 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:51.554 +++ basename /dev/fd/62 00:15:51.554 ++ mktemp /tmp/62.XXX 00:15:51.554 + tmp_file_1=/tmp/62.t2r 00:15:51.554 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:51.554 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:51.554 + tmp_file_2=/tmp/spdk_tgt_config.json.LaA 00:15:51.554 + ret=0 00:15:51.554 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:52.122 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:52.122 + diff -u /tmp/62.t2r /tmp/spdk_tgt_config.json.LaA 00:15:52.122 + ret=1 00:15:52.122 + echo '=== Start of file: /tmp/62.t2r ===' 00:15:52.122 + cat /tmp/62.t2r 00:15:52.122 + echo '=== End of file: /tmp/62.t2r ===' 00:15:52.122 + echo '' 00:15:52.122 + echo '=== Start of file: /tmp/spdk_tgt_config.json.LaA ===' 00:15:52.122 + cat /tmp/spdk_tgt_config.json.LaA 00:15:52.122 + echo '=== End of file: /tmp/spdk_tgt_config.json.LaA ===' 00:15:52.122 + echo '' 00:15:52.122 + rm /tmp/62.t2r /tmp/spdk_tgt_config.json.LaA 00:15:52.122 + exit 1 00:15:52.122 INFO: configuration change detected. 00:15:52.122 13:26:09 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:15:52.122 13:26:09 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:15:52.122 13:26:09 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:15:52.122 13:26:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:52.122 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 13:26:09 -- json_config/json_config.sh@307 -- # local ret=0 00:15:52.122 13:26:09 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:15:52.122 13:26:09 -- json_config/json_config.sh@317 -- # [[ -n 61132 ]] 00:15:52.122 13:26:09 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:15:52.122 13:26:09 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:15:52.122 13:26:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:52.122 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 13:26:09 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:15:52.122 13:26:09 -- json_config/json_config.sh@193 -- # uname -s 00:15:52.122 13:26:09 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:15:52.122 13:26:09 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:15:52.122 13:26:09 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:15:52.122 13:26:09 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:15:52.122 13:26:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:52.122 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 13:26:09 -- json_config/json_config.sh@323 -- # killprocess 61132 00:15:52.122 13:26:09 -- common/autotest_common.sh@936 -- # '[' -z 61132 ']' 00:15:52.122 13:26:09 -- common/autotest_common.sh@940 -- # kill -0 61132 00:15:52.122 13:26:09 -- common/autotest_common.sh@941 -- # uname 00:15:52.122 13:26:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:52.122 13:26:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61132 00:15:52.122 13:26:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:52.122 killing process with pid 61132 00:15:52.122 13:26:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:52.122 13:26:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61132' 00:15:52.122 
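Note: the two json_diff.sh runs above implement a check-and-detect pattern: the live config (tgt_rpc save_config) and the saved spdk_tgt_config.json are both normalized with config_filter.py -method sort and compared with diff -u; the comparison must succeed right after relaunch, then fail once the marker bdev MallocBdevForConfigChangeCheck is deleted. A compressed sketch of that flow, assuming config_filter.py accepts the JSON on stdin the way json_diff.sh feeds it (the harness goes through temp files rather than process substitution):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  same() { diff -u <($rpc save_config | $filter -method sort) <($filter -method sort < "$saved"); }
  same || exit 1                                    # configs must match right after relaunch
  $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
  same && exit 1 || echo 'INFO: configuration change detected.'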
13:26:09 -- common/autotest_common.sh@955 -- # kill 61132 00:15:52.122 13:26:09 -- common/autotest_common.sh@960 -- # wait 61132 00:15:52.380 13:26:09 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:52.380 13:26:09 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:15:52.380 13:26:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:52.380 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.380 13:26:09 -- json_config/json_config.sh@328 -- # return 0 00:15:52.380 INFO: Success 00:15:52.380 13:26:09 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:15:52.380 00:15:52.380 real 0m8.545s 00:15:52.380 user 0m12.251s 00:15:52.380 sys 0m1.918s 00:15:52.380 13:26:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.380 ************************************ 00:15:52.380 END TEST json_config 00:15:52.380 ************************************ 00:15:52.380 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.380 13:26:09 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:52.380 13:26:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:52.380 13:26:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.380 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 ************************************ 00:15:52.640 START TEST json_config_extra_key 00:15:52.640 ************************************ 00:15:52.640 13:26:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.640 13:26:09 -- nvmf/common.sh@7 -- # uname -s 00:15:52.640 13:26:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.640 13:26:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.640 13:26:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.640 13:26:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.640 13:26:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.640 13:26:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.640 13:26:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.640 13:26:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.640 13:26:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.640 13:26:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.640 13:26:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:15:52.640 13:26:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:15:52.640 13:26:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.640 13:26:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.640 13:26:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:52.640 13:26:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.640 13:26:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.640 13:26:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.640 13:26:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.640 13:26:09 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.640 13:26:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.640 13:26:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.640 13:26:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.640 13:26:09 -- paths/export.sh@5 -- # export PATH 00:15:52.640 13:26:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.640 13:26:09 -- nvmf/common.sh@47 -- # : 0 00:15:52.640 13:26:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.640 13:26:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.640 13:26:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.640 13:26:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.640 13:26:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.640 13:26:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.640 13:26:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.640 13:26:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:52.640 INFO: launching applications... 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:15:52.640 13:26:09 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:52.640 13:26:09 -- json_config/common.sh@9 -- # local app=target 00:15:52.640 13:26:09 -- json_config/common.sh@10 -- # shift 00:15:52.640 13:26:09 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:52.640 13:26:09 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:52.640 13:26:09 -- json_config/common.sh@15 -- # local app_extra_params= 00:15:52.640 13:26:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:52.640 13:26:09 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:52.640 13:26:09 -- json_config/common.sh@22 -- # app_pid["$app"]=61308 00:15:52.640 Waiting for target to run... 00:15:52.640 13:26:09 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:52.640 13:26:09 -- json_config/common.sh@25 -- # waitforlisten 61308 /var/tmp/spdk_tgt.sock 00:15:52.640 13:26:09 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:52.640 13:26:09 -- common/autotest_common.sh@817 -- # '[' -z 61308 ']' 00:15:52.640 13:26:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:52.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:52.640 13:26:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.640 13:26:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:52.640 13:26:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.640 13:26:09 -- common/autotest_common.sh@10 -- # set +x 00:15:52.640 [2024-04-26 13:26:10.040190] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:52.640 [2024-04-26 13:26:10.040308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61308 ] 00:15:53.208 [2024-04-26 13:26:10.481466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.208 [2024-04-26 13:26:10.582170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.775 13:26:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.775 00:15:53.775 13:26:11 -- common/autotest_common.sh@850 -- # return 0 00:15:53.775 13:26:11 -- json_config/common.sh@26 -- # echo '' 00:15:53.775 INFO: shutting down applications... 00:15:53.775 13:26:11 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
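Note: json_config/common.sh tracks each application in the associative arrays declared above (app_pid, app_socket, app_params, configs_path), keyed by app name. A simplified sketch of how the 'target' entry used in this run is populated and launched; the backgrounding and the waitforlisten polling are condensed here, but the values are the ones traced above:

  declare -A app_pid=()
  declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock)
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[target]} \
      -r "${app_socket[target]}" --json "${configs_path[target]}" &
  app_pid[target]=$!
  echo 'Waiting for target to run...'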
00:15:53.775 13:26:11 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:53.775 13:26:11 -- json_config/common.sh@31 -- # local app=target 00:15:53.775 13:26:11 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:53.775 13:26:11 -- json_config/common.sh@35 -- # [[ -n 61308 ]] 00:15:53.775 13:26:11 -- json_config/common.sh@38 -- # kill -SIGINT 61308 00:15:53.775 13:26:11 -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:53.775 13:26:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:53.775 13:26:11 -- json_config/common.sh@41 -- # kill -0 61308 00:15:53.775 13:26:11 -- json_config/common.sh@45 -- # sleep 0.5 00:15:54.357 13:26:11 -- json_config/common.sh@40 -- # (( i++ )) 00:15:54.357 13:26:11 -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:54.357 13:26:11 -- json_config/common.sh@41 -- # kill -0 61308 00:15:54.357 13:26:11 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:54.357 13:26:11 -- json_config/common.sh@43 -- # break 00:15:54.357 13:26:11 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:54.357 SPDK target shutdown done 00:15:54.357 13:26:11 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:54.357 Success 00:15:54.357 13:26:11 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:54.357 00:15:54.357 real 0m1.626s 00:15:54.357 user 0m1.534s 00:15:54.357 sys 0m0.479s 00:15:54.357 13:26:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:54.357 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:54.357 ************************************ 00:15:54.357 END TEST json_config_extra_key 00:15:54.357 ************************************ 00:15:54.357 13:26:11 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:54.357 13:26:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:54.357 13:26:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:54.357 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:54.357 ************************************ 00:15:54.357 START TEST alias_rpc 00:15:54.357 ************************************ 00:15:54.357 13:26:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:54.357 * Looking for test storage... 00:15:54.357 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:54.357 13:26:11 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:54.357 13:26:11 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61396 00:15:54.357 13:26:11 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:54.357 13:26:11 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61396 00:15:54.357 13:26:11 -- common/autotest_common.sh@817 -- # '[' -z 61396 ']' 00:15:54.357 13:26:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.357 13:26:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:54.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.357 13:26:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.357 13:26:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:54.357 13:26:11 -- common/autotest_common.sh@10 -- # set +x 00:15:54.357 [2024-04-26 13:26:11.772987] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:54.357 [2024-04-26 13:26:11.773088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61396 ] 00:15:54.614 [2024-04-26 13:26:11.904226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.614 [2024-04-26 13:26:12.016766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.548 13:26:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.548 13:26:12 -- common/autotest_common.sh@850 -- # return 0 00:15:55.548 13:26:12 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:55.807 13:26:13 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61396 00:15:55.807 13:26:13 -- common/autotest_common.sh@936 -- # '[' -z 61396 ']' 00:15:55.807 13:26:13 -- common/autotest_common.sh@940 -- # kill -0 61396 00:15:55.807 13:26:13 -- common/autotest_common.sh@941 -- # uname 00:15:55.807 13:26:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.807 13:26:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61396 00:15:55.807 13:26:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.807 13:26:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.807 13:26:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61396' 00:15:55.807 killing process with pid 61396 00:15:55.807 13:26:13 -- common/autotest_common.sh@955 -- # kill 61396 00:15:55.807 13:26:13 -- common/autotest_common.sh@960 -- # wait 61396 00:15:56.064 00:15:56.064 real 0m1.872s 00:15:56.064 user 0m2.149s 00:15:56.064 sys 0m0.429s 00:15:56.064 13:26:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:56.064 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 ************************************ 00:15:56.064 END TEST alias_rpc 00:15:56.064 ************************************ 00:15:56.320 13:26:13 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:15:56.320 13:26:13 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:56.321 13:26:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:56.321 13:26:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.321 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:56.321 ************************************ 00:15:56.321 START TEST dpdk_mem_utility 00:15:56.321 ************************************ 00:15:56.321 13:26:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:56.321 * Looking for test storage... 
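Note: every test in this log tears its target down through the killprocess helper traced above: check the pid argument, confirm the process is alive with kill -0, read its comm name with ps, refuse to touch a sudo process, then kill and wait. Reconstructed from that trace (Linux branch only) as a rough sketch:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                        # process must still exist
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
      [ "$process_name" = sudo ] && return 1            # never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap it (pid is a child of this shell)
  }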
00:15:56.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:56.321 13:26:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:56.321 13:26:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61493 00:15:56.321 13:26:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:56.321 13:26:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61493 00:15:56.321 13:26:13 -- common/autotest_common.sh@817 -- # '[' -z 61493 ']' 00:15:56.321 13:26:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.321 13:26:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:56.321 13:26:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.321 13:26:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:56.321 13:26:13 -- common/autotest_common.sh@10 -- # set +x 00:15:56.321 [2024-04-26 13:26:13.757406] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:56.321 [2024-04-26 13:26:13.757969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61493 ] 00:15:56.578 [2024-04-26 13:26:13.892258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.578 [2024-04-26 13:26:14.005593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.512 13:26:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:57.512 13:26:14 -- common/autotest_common.sh@850 -- # return 0 00:15:57.512 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:57.512 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:57.512 13:26:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:57.512 13:26:14 -- common/autotest_common.sh@10 -- # set +x 00:15:57.512 { 00:15:57.513 "filename": "/tmp/spdk_mem_dump.txt" 00:15:57.513 } 00:15:57.513 13:26:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:57.513 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:57.513 DPDK memory size 814.000000 MiB in 1 heap(s) 00:15:57.513 1 heaps totaling size 814.000000 MiB 00:15:57.513 size: 814.000000 MiB heap id: 0 00:15:57.513 end heaps---------- 00:15:57.513 8 mempools totaling size 598.116089 MiB 00:15:57.513 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:57.513 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:57.513 size: 84.521057 MiB name: bdev_io_61493 00:15:57.513 size: 51.011292 MiB name: evtpool_61493 00:15:57.513 size: 50.003479 MiB name: msgpool_61493 00:15:57.513 size: 21.763794 MiB name: PDU_Pool 00:15:57.513 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:57.513 size: 0.026123 MiB name: Session_Pool 00:15:57.513 end mempools------- 00:15:57.513 6 memzones totaling size 4.142822 MiB 00:15:57.513 size: 1.000366 MiB name: RG_ring_0_61493 00:15:57.513 size: 1.000366 MiB name: RG_ring_1_61493 00:15:57.513 size: 1.000366 MiB name: RG_ring_4_61493 00:15:57.513 size: 1.000366 MiB name: 
RG_ring_5_61493 00:15:57.513 size: 0.125366 MiB name: RG_ring_2_61493 00:15:57.513 size: 0.015991 MiB name: RG_ring_3_61493 00:15:57.513 end memzones------- 00:15:57.513 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:57.513 heap id: 0 total size: 814.000000 MiB number of busy elements: 218 number of free elements: 15 00:15:57.513 list of free elements. size: 12.486938 MiB 00:15:57.513 element at address: 0x200000400000 with size: 1.999512 MiB 00:15:57.513 element at address: 0x200018e00000 with size: 0.999878 MiB 00:15:57.513 element at address: 0x200019000000 with size: 0.999878 MiB 00:15:57.513 element at address: 0x200003e00000 with size: 0.996277 MiB 00:15:57.513 element at address: 0x200031c00000 with size: 0.994446 MiB 00:15:57.513 element at address: 0x200013800000 with size: 0.978699 MiB 00:15:57.513 element at address: 0x200007000000 with size: 0.959839 MiB 00:15:57.513 element at address: 0x200019200000 with size: 0.936584 MiB 00:15:57.513 element at address: 0x200000200000 with size: 0.837036 MiB 00:15:57.513 element at address: 0x20001aa00000 with size: 0.572083 MiB 00:15:57.513 element at address: 0x20000b200000 with size: 0.489990 MiB 00:15:57.513 element at address: 0x200000800000 with size: 0.487061 MiB 00:15:57.513 element at address: 0x200019400000 with size: 0.485657 MiB 00:15:57.513 element at address: 0x200027e00000 with size: 0.398315 MiB 00:15:57.513 element at address: 0x200003a00000 with size: 0.351685 MiB 00:15:57.513 list of standard malloc elements. size: 199.250488 MiB 00:15:57.513 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:15:57.513 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:15:57.513 element at address: 0x200018efff80 with size: 1.000122 MiB 00:15:57.513 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:15:57.513 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:15:57.513 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:15:57.513 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:15:57.513 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:15:57.513 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:15:57.513 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7280 with size: 0.000183 MiB 
00:15:57.513 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003adb300 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003adb500 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003affa80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003affb40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:15:57.513 element at 
address: 0x2000070fdd80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:15:57.513 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94300 
with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:15:57.514 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e66040 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6dec0 with size: 0.000183 MiB 
00:15:57.514 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:15:57.514 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:15:57.514 list of memzone associated elements. 
size: 602.262573 MiB 00:15:57.514 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:15:57.514 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:15:57.514 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:15:57.514 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:15:57.514 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:15:57.514 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61493_0 00:15:57.514 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:15:57.514 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61493_0 00:15:57.514 element at address: 0x200003fff380 with size: 48.003052 MiB 00:15:57.514 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61493_0 00:15:57.514 element at address: 0x2000195be940 with size: 20.255554 MiB 00:15:57.514 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:15:57.514 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:15:57.514 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:15:57.514 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:15:57.514 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61493 00:15:57.514 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:15:57.514 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61493 00:15:57.514 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:15:57.514 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61493 00:15:57.514 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:15:57.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:15:57.514 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:15:57.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:15:57.514 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:15:57.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:15:57.514 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:15:57.514 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:15:57.514 element at address: 0x200003eff180 with size: 1.000488 MiB 00:15:57.514 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61493 00:15:57.514 element at address: 0x200003affc00 with size: 1.000488 MiB 00:15:57.514 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61493 00:15:57.514 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:15:57.514 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61493 00:15:57.514 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:15:57.515 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61493 00:15:57.515 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:15:57.515 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61493 00:15:57.515 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:15:57.515 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:15:57.515 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:15:57.515 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:15:57.515 element at address: 0x20001947c540 with size: 0.250488 MiB 00:15:57.515 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:15:57.515 element at address: 0x200003adf880 with size: 0.125488 MiB 00:15:57.515 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61493 00:15:57.515 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:15:57.515 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:15:57.515 element at address: 0x200027e66100 with size: 0.023743 MiB 00:15:57.515 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:15:57.515 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:15:57.515 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61493 00:15:57.515 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:15:57.515 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:15:57.515 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:15:57.515 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61493 00:15:57.515 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:15:57.515 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61493 00:15:57.515 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:15:57.515 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:15:57.515 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:15:57.515 13:26:14 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61493 00:15:57.515 13:26:14 -- common/autotest_common.sh@936 -- # '[' -z 61493 ']' 00:15:57.515 13:26:14 -- common/autotest_common.sh@940 -- # kill -0 61493 00:15:57.515 13:26:14 -- common/autotest_common.sh@941 -- # uname 00:15:57.515 13:26:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:57.515 13:26:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61493 00:15:57.515 13:26:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:57.515 killing process with pid 61493 00:15:57.515 13:26:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:57.515 13:26:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61493' 00:15:57.515 13:26:14 -- common/autotest_common.sh@955 -- # kill 61493 00:15:57.515 13:26:14 -- common/autotest_common.sh@960 -- # wait 61493 00:15:58.082 00:15:58.082 real 0m1.668s 00:15:58.082 user 0m1.750s 00:15:58.082 sys 0m0.438s 00:15:58.082 13:26:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:58.082 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:58.082 ************************************ 00:15:58.082 END TEST dpdk_mem_utility 00:15:58.082 ************************************ 00:15:58.082 13:26:15 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:58.082 13:26:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:58.082 13:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.082 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:58.082 ************************************ 00:15:58.082 START TEST event 00:15:58.082 ************************************ 00:15:58.082 13:26:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:58.082 * Looking for test storage... 
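Note: the dpdk_mem_utility report above comes from two commands: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory statistics (the reply names the dump file, /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py turns that dump into the heap/mempool/memzone summary, with -m 0 producing the detailed per-element listing shown above (heap 0 in this run). Against a running spdk_tgt on the default /var/tmp/spdk.sock, as used here:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc env_dpdk_get_mem_stats                                  # target writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # summary: heaps, mempools, memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # per-element lists for heap 0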
00:15:58.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:58.082 13:26:15 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:58.082 13:26:15 -- bdev/nbd_common.sh@6 -- # set -e 00:15:58.082 13:26:15 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:58.082 13:26:15 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:15:58.082 13:26:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:58.082 13:26:15 -- common/autotest_common.sh@10 -- # set +x 00:15:58.340 ************************************ 00:15:58.340 START TEST event_perf 00:15:58.340 ************************************ 00:15:58.340 13:26:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:58.340 Running I/O for 1 seconds...[2024-04-26 13:26:15.585726] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:15:58.340 [2024-04-26 13:26:15.585838] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:15:58.340 [2024-04-26 13:26:15.721965] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.599 [2024-04-26 13:26:15.839166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.599 [2024-04-26 13:26:15.839307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.599 Running I/O for 1 seconds...[2024-04-26 13:26:15.839450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.599 [2024-04-26 13:26:15.839546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.547 00:15:59.547 lcore 0: 199290 00:15:59.547 lcore 1: 199290 00:15:59.547 lcore 2: 199289 00:15:59.547 lcore 3: 199289 00:15:59.547 done. 00:15:59.547 00:15:59.547 real 0m1.398s 00:15:59.547 user 0m4.212s 00:15:59.547 sys 0m0.066s 00:15:59.547 13:26:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:59.547 13:26:16 -- common/autotest_common.sh@10 -- # set +x 00:15:59.547 ************************************ 00:15:59.547 END TEST event_perf 00:15:59.547 ************************************ 00:15:59.806 13:26:17 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:59.806 13:26:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:59.806 13:26:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:59.806 13:26:17 -- common/autotest_common.sh@10 -- # set +x 00:15:59.806 ************************************ 00:15:59.806 START TEST event_reactor 00:15:59.806 ************************************ 00:15:59.806 13:26:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:59.806 [2024-04-26 13:26:17.108549] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:15:59.806 [2024-04-26 13:26:17.108698] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61635 ] 00:15:59.806 [2024-04-26 13:26:17.247557] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.065 [2024-04-26 13:26:17.388131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.442 test_start 00:16:01.442 oneshot 00:16:01.442 tick 100 00:16:01.442 tick 100 00:16:01.442 tick 250 00:16:01.442 tick 100 00:16:01.442 tick 100 00:16:01.442 tick 100 00:16:01.442 tick 250 00:16:01.442 tick 500 00:16:01.442 tick 100 00:16:01.442 tick 100 00:16:01.442 tick 250 00:16:01.442 tick 100 00:16:01.442 tick 100 00:16:01.442 test_end 00:16:01.442 00:16:01.442 real 0m1.423s 00:16:01.442 user 0m1.252s 00:16:01.442 sys 0m0.063s 00:16:01.442 13:26:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:01.442 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:01.442 ************************************ 00:16:01.442 END TEST event_reactor 00:16:01.442 ************************************ 00:16:01.442 13:26:18 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:01.442 13:26:18 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:01.442 13:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.442 13:26:18 -- common/autotest_common.sh@10 -- # set +x 00:16:01.442 ************************************ 00:16:01.442 START TEST event_reactor_perf 00:16:01.442 ************************************ 00:16:01.442 13:26:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:16:01.442 [2024-04-26 13:26:18.653669] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:01.442 [2024-04-26 13:26:18.653767] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61675 ] 00:16:01.442 [2024-04-26 13:26:18.793551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.702 [2024-04-26 13:26:18.902497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.638 test_start 00:16:02.638 test_end 00:16:02.638 Performance: 367461 events per second 00:16:02.638 00:16:02.638 real 0m1.385s 00:16:02.638 user 0m1.212s 00:16:02.638 sys 0m0.066s 00:16:02.638 13:26:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:02.638 ************************************ 00:16:02.638 END TEST event_reactor_perf 00:16:02.638 ************************************ 00:16:02.638 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:16:02.638 13:26:20 -- event/event.sh@49 -- # uname -s 00:16:02.638 13:26:20 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:16:02.638 13:26:20 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:02.638 13:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:02.638 13:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:02.638 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:16:02.897 ************************************ 00:16:02.897 START TEST event_scheduler 00:16:02.897 ************************************ 00:16:02.897 13:26:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:16:02.897 * Looking for test storage... 00:16:02.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:16:02.897 13:26:20 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:16:02.897 13:26:20 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61741 00:16:02.897 13:26:20 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:16:02.897 13:26:20 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:16:02.897 13:26:20 -- scheduler/scheduler.sh@37 -- # waitforlisten 61741 00:16:02.897 13:26:20 -- common/autotest_common.sh@817 -- # '[' -z 61741 ']' 00:16:02.897 13:26:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.897 13:26:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:02.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.897 13:26:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.897 13:26:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:02.897 13:26:20 -- common/autotest_common.sh@10 -- # set +x 00:16:02.897 [2024-04-26 13:26:20.289982] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:02.897 [2024-04-26 13:26:20.290113] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61741 ] 00:16:03.156 [2024-04-26 13:26:20.429016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.156 [2024-04-26 13:26:20.547420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.156 [2024-04-26 13:26:20.547532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.156 [2024-04-26 13:26:20.547661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.156 [2024-04-26 13:26:20.547664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.094 13:26:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:04.094 13:26:21 -- common/autotest_common.sh@850 -- # return 0 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 POWER: Env isn't set yet! 00:16:04.095 POWER: Attempting to initialise ACPI cpufreq power management... 00:16:04.095 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:04.095 POWER: Cannot set governor of lcore 0 to userspace 00:16:04.095 POWER: Attempting to initialise PSTAT power management... 00:16:04.095 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:04.095 POWER: Cannot set governor of lcore 0 to performance 00:16:04.095 POWER: Attempting to initialise AMD PSTATE power management... 00:16:04.095 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:04.095 POWER: Cannot set governor of lcore 0 to userspace 00:16:04.095 POWER: Attempting to initialise CPPC power management... 00:16:04.095 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:16:04.095 POWER: Cannot set governor of lcore 0 to userspace 00:16:04.095 POWER: Attempting to initialise VM power management... 00:16:04.095 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:16:04.095 POWER: Unable to set Power Management Environment for lcore 0 00:16:04.095 [2024-04-26 13:26:21.277453] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:16:04.095 [2024-04-26 13:26:21.277468] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:16:04.095 [2024-04-26 13:26:21.277476] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 [2024-04-26 13:26:21.376643] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:16:04.095 13:26:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:04.095 13:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 ************************************ 00:16:04.095 START TEST scheduler_create_thread 00:16:04.095 ************************************ 00:16:04.095 13:26:21 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 2 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 3 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 4 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 5 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 6 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 7 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 8 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 9 00:16:04.095 
13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 10 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.095 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:16:04.095 13:26:21 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:16:04.095 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.095 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:04.364 13:26:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:04.364 13:26:21 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:16:04.364 13:26:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:04.364 13:26:21 -- common/autotest_common.sh@10 -- # set +x 00:16:05.741 13:26:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:05.741 13:26:23 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:16:05.741 13:26:23 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:16:05.741 13:26:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:05.741 13:26:23 -- common/autotest_common.sh@10 -- # set +x 00:16:06.700 13:26:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.700 00:16:06.700 real 0m2.611s 00:16:06.700 user 0m0.016s 00:16:06.700 sys 0m0.007s 00:16:06.700 13:26:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:06.700 ************************************ 00:16:06.700 END TEST scheduler_create_thread 00:16:06.700 ************************************ 00:16:06.700 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:06.700 13:26:24 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:06.700 13:26:24 -- scheduler/scheduler.sh@46 -- # killprocess 61741 00:16:06.700 13:26:24 -- common/autotest_common.sh@936 -- # '[' -z 61741 ']' 00:16:06.700 13:26:24 -- common/autotest_common.sh@940 -- # kill -0 61741 00:16:06.700 13:26:24 -- common/autotest_common.sh@941 -- # uname 00:16:06.700 13:26:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:06.700 13:26:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61741 00:16:06.700 killing process with pid 61741 00:16:06.700 13:26:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:06.700 13:26:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:06.700 13:26:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61741' 00:16:06.700 13:26:24 -- common/autotest_common.sh@955 -- # kill 61741 00:16:06.700 13:26:24 -- common/autotest_common.sh@960 -- # wait 61741 00:16:07.266 [2024-04-26 13:26:24.541212] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
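Between the governor setup and the shutdown notice above, scheduler_create_thread drives the scheduler_plugin RPCs whose traces appear in the log: it creates busy and idle threads pinned to each core, then exercises an activity change and a thread deletion. A condensed sketch of those calls (thread names, masks and activity values copied from the trace; the rpc wrapper and variable names are illustrative):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock --plugin scheduler_plugin "$@"; }
rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # fully busy thread pinned to core 0 (repeated for 0x2/0x4/0x8)
rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle counterpart on the same core
tid=$(rpc scheduler_thread_create -n half_active -a 0)       # unpinned thread, initially idle
rpc scheduler_thread_set_active "$tid" 50                    # raise it to ~50% activity
tid=$(rpc scheduler_thread_create -n deleted -a 100)
rpc scheduler_thread_delete "$tid"                           # create-then-delete path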
00:16:07.524 ************************************ 00:16:07.524 END TEST event_scheduler 00:16:07.524 ************************************ 00:16:07.524 00:16:07.524 real 0m4.663s 00:16:07.524 user 0m8.761s 00:16:07.524 sys 0m0.408s 00:16:07.524 13:26:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:07.524 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.524 13:26:24 -- event/event.sh@51 -- # modprobe -n nbd 00:16:07.524 13:26:24 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:16:07.524 13:26:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:07.524 13:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:07.524 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.524 ************************************ 00:16:07.524 START TEST app_repeat 00:16:07.524 ************************************ 00:16:07.524 13:26:24 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:16:07.524 13:26:24 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.524 13:26:24 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:07.524 13:26:24 -- event/event.sh@13 -- # local nbd_list 00:16:07.524 13:26:24 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:07.524 13:26:24 -- event/event.sh@14 -- # local bdev_list 00:16:07.524 13:26:24 -- event/event.sh@15 -- # local repeat_times=4 00:16:07.524 13:26:24 -- event/event.sh@17 -- # modprobe nbd 00:16:07.524 13:26:24 -- event/event.sh@19 -- # repeat_pid=61867 00:16:07.524 13:26:24 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:16:07.525 13:26:24 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:16:07.525 Process app_repeat pid: 61867 00:16:07.525 spdk_app_start Round 0 00:16:07.525 13:26:24 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61867' 00:16:07.525 13:26:24 -- event/event.sh@23 -- # for i in {0..2} 00:16:07.525 13:26:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:16:07.525 13:26:24 -- event/event.sh@25 -- # waitforlisten 61867 /var/tmp/spdk-nbd.sock 00:16:07.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:07.525 13:26:24 -- common/autotest_common.sh@817 -- # '[' -z 61867 ']' 00:16:07.525 13:26:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:07.525 13:26:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:07.525 13:26:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:07.525 13:26:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:07.525 13:26:24 -- common/autotest_common.sh@10 -- # set +x 00:16:07.525 [2024-04-26 13:26:24.950818] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:07.525 [2024-04-26 13:26:24.950901] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61867 ] 00:16:07.783 [2024-04-26 13:26:25.089197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.783 [2024-04-26 13:26:25.216769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.783 [2024-04-26 13:26:25.216790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.717 13:26:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.717 13:26:25 -- common/autotest_common.sh@850 -- # return 0 00:16:08.717 13:26:25 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:08.975 Malloc0 00:16:08.975 13:26:26 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:09.233 Malloc1 00:16:09.233 13:26:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@12 -- # local i 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.233 13:26:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:09.491 /dev/nbd0 00:16:09.491 13:26:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.491 13:26:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.491 13:26:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:09.491 13:26:26 -- common/autotest_common.sh@855 -- # local i 00:16:09.491 13:26:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:09.491 13:26:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:09.491 13:26:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:09.491 13:26:26 -- common/autotest_common.sh@859 -- # break 00:16:09.491 13:26:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:09.491 13:26:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:09.491 13:26:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:09.491 1+0 records in 00:16:09.491 1+0 records out 00:16:09.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330953 s, 12.4 MB/s 00:16:09.491 13:26:26 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:09.491 13:26:26 -- common/autotest_common.sh@872 -- # size=4096 00:16:09.491 13:26:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:09.491 13:26:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:09.491 13:26:26 -- common/autotest_common.sh@875 -- # return 0 00:16:09.491 13:26:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.491 13:26:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.491 13:26:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:09.749 /dev/nbd1 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:09.749 13:26:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:09.749 13:26:27 -- common/autotest_common.sh@855 -- # local i 00:16:09.749 13:26:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:09.749 13:26:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:09.749 13:26:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:09.749 13:26:27 -- common/autotest_common.sh@859 -- # break 00:16:09.749 13:26:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:09.749 13:26:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:09.749 13:26:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:09.749 1+0 records in 00:16:09.749 1+0 records out 00:16:09.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235744 s, 17.4 MB/s 00:16:09.749 13:26:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:09.749 13:26:27 -- common/autotest_common.sh@872 -- # size=4096 00:16:09.749 13:26:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:09.749 13:26:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:09.749 13:26:27 -- common/autotest_common.sh@875 -- # return 0 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:09.749 13:26:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:10.007 13:26:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:10.007 { 00:16:10.007 "bdev_name": "Malloc0", 00:16:10.007 "nbd_device": "/dev/nbd0" 00:16:10.007 }, 00:16:10.007 { 00:16:10.007 "bdev_name": "Malloc1", 00:16:10.007 "nbd_device": "/dev/nbd1" 00:16:10.007 } 00:16:10.007 ]' 00:16:10.007 13:26:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:10.007 { 00:16:10.007 "bdev_name": "Malloc0", 00:16:10.007 "nbd_device": "/dev/nbd0" 00:16:10.007 }, 00:16:10.007 { 00:16:10.007 "bdev_name": "Malloc1", 00:16:10.007 "nbd_device": "/dev/nbd1" 00:16:10.007 } 00:16:10.007 ]' 00:16:10.007 13:26:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:10.265 /dev/nbd1' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:10.265 /dev/nbd1' 00:16:10.265 13:26:27 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@65 -- # count=2 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@95 -- # count=2 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:10.265 256+0 records in 00:16:10.265 256+0 records out 00:16:10.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758334 s, 138 MB/s 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:10.265 256+0 records in 00:16:10.265 256+0 records out 00:16:10.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024829 s, 42.2 MB/s 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:10.265 256+0 records in 00:16:10.265 256+0 records out 00:16:10.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273662 s, 38.3 MB/s 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:10.265 13:26:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@51 -- # local i 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.266 13:26:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@41 -- # break 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.524 13:26:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@41 -- # break 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:10.782 13:26:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@65 -- # true 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@65 -- # count=0 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@104 -- # count=0 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:11.040 13:26:28 -- bdev/nbd_common.sh@109 -- # return 0 00:16:11.040 13:26:28 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:11.351 13:26:28 -- event/event.sh@35 -- # sleep 3 00:16:11.610 [2024-04-26 13:26:28.940314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:11.610 [2024-04-26 13:26:29.031410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.610 [2024-04-26 13:26:29.031419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.868 [2024-04-26 13:26:29.088366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:16:11.868 [2024-04-26 13:26:29.088437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
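Each app_repeat round above performs the same nbd round-trip check: two 64 MiB malloc bdevs are created, exported as /dev/nbd0 and /dev/nbd1, a 1 MiB random pattern is written through each device and compared back, and the devices are torn down. A condensed sketch of that per-round flow, using the paths and commands shown in the trace (the loop structure is simplified from the nbd_common.sh helpers):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
rpc bdev_malloc_create 64 4096                            # Malloc0: 64 MiB, 4 KiB blocks
rpc bdev_malloc_create 64 4096                            # Malloc1
rpc nbd_start_disk Malloc0 /dev/nbd0
rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB reference pattern
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern through the nbd export
  cmp -b -n 1M "$tmp" "$nbd"                              # read back and verify byte-for-byte
  rpc nbd_stop_disk "$nbd"
done
rm "$tmp"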
00:16:14.400 13:26:31 -- event/event.sh@23 -- # for i in {0..2} 00:16:14.400 spdk_app_start Round 1 00:16:14.400 13:26:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:16:14.400 13:26:31 -- event/event.sh@25 -- # waitforlisten 61867 /var/tmp/spdk-nbd.sock 00:16:14.400 13:26:31 -- common/autotest_common.sh@817 -- # '[' -z 61867 ']' 00:16:14.400 13:26:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:14.400 13:26:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:14.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:14.400 13:26:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:14.400 13:26:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:14.400 13:26:31 -- common/autotest_common.sh@10 -- # set +x 00:16:14.704 13:26:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:14.704 13:26:31 -- common/autotest_common.sh@850 -- # return 0 00:16:14.704 13:26:31 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:14.964 Malloc0 00:16:14.964 13:26:32 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:15.223 Malloc1 00:16:15.223 13:26:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@12 -- # local i 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.223 13:26:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:15.480 /dev/nbd0 00:16:15.480 13:26:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.480 13:26:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.480 13:26:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:15.480 13:26:32 -- common/autotest_common.sh@855 -- # local i 00:16:15.480 13:26:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:15.480 13:26:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:15.480 13:26:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:15.480 13:26:32 -- common/autotest_common.sh@859 -- # break 00:16:15.480 13:26:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.480 13:26:32 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:16:15.480 13:26:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:15.480 1+0 records in 00:16:15.480 1+0 records out 00:16:15.480 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305047 s, 13.4 MB/s 00:16:15.480 13:26:32 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:15.480 13:26:32 -- common/autotest_common.sh@872 -- # size=4096 00:16:15.480 13:26:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:15.480 13:26:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:15.480 13:26:32 -- common/autotest_common.sh@875 -- # return 0 00:16:15.480 13:26:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.480 13:26:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.480 13:26:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:15.738 /dev/nbd1 00:16:15.738 13:26:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:15.996 13:26:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:15.996 13:26:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:15.996 13:26:33 -- common/autotest_common.sh@855 -- # local i 00:16:15.996 13:26:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:15.997 13:26:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:15.997 13:26:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:15.997 13:26:33 -- common/autotest_common.sh@859 -- # break 00:16:15.997 13:26:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:15.997 13:26:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:15.997 13:26:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:15.997 1+0 records in 00:16:15.997 1+0 records out 00:16:15.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344703 s, 11.9 MB/s 00:16:15.997 13:26:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:15.997 13:26:33 -- common/autotest_common.sh@872 -- # size=4096 00:16:15.997 13:26:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:15.997 13:26:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:15.997 13:26:33 -- common/autotest_common.sh@875 -- # return 0 00:16:15.997 13:26:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.997 13:26:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.997 13:26:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:15.997 13:26:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:15.997 13:26:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:16.254 { 00:16:16.254 "bdev_name": "Malloc0", 00:16:16.254 "nbd_device": "/dev/nbd0" 00:16:16.254 }, 00:16:16.254 { 00:16:16.254 "bdev_name": "Malloc1", 00:16:16.254 "nbd_device": "/dev/nbd1" 00:16:16.254 } 00:16:16.254 ]' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:16.254 { 00:16:16.254 "bdev_name": "Malloc0", 00:16:16.254 "nbd_device": "/dev/nbd0" 00:16:16.254 }, 00:16:16.254 { 00:16:16.254 "bdev_name": "Malloc1", 00:16:16.254 "nbd_device": "/dev/nbd1" 00:16:16.254 } 
00:16:16.254 ]' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:16.254 /dev/nbd1' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:16.254 /dev/nbd1' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@65 -- # count=2 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@95 -- # count=2 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:16.254 256+0 records in 00:16:16.254 256+0 records out 00:16:16.254 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00890013 s, 118 MB/s 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:16.254 13:26:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:16.255 256+0 records in 00:16:16.255 256+0 records out 00:16:16.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244816 s, 42.8 MB/s 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:16.255 256+0 records in 00:16:16.255 256+0 records out 00:16:16.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301447 s, 34.8 MB/s 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:16:16.255 13:26:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@51 -- # local i 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.255 13:26:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@41 -- # break 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.513 13:26:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@41 -- # break 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@45 -- # return 0 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:17.078 13:26:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@65 -- # true 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@65 -- # count=0 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@104 -- # count=0 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:17.337 13:26:34 -- bdev/nbd_common.sh@109 -- # return 0 00:16:17.337 13:26:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:17.596 13:26:34 -- event/event.sh@35 -- # sleep 3 00:16:17.853 [2024-04-26 13:26:35.120020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.853 [2024-04-26 13:26:35.225608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.853 [2024-04-26 13:26:35.225615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.853 [2024-04-26 13:26:35.281936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:16:17.853 [2024-04-26 13:26:35.282007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:21.139 13:26:37 -- event/event.sh@23 -- # for i in {0..2} 00:16:21.139 spdk_app_start Round 2 00:16:21.139 13:26:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:16:21.139 13:26:37 -- event/event.sh@25 -- # waitforlisten 61867 /var/tmp/spdk-nbd.sock 00:16:21.139 13:26:37 -- common/autotest_common.sh@817 -- # '[' -z 61867 ']' 00:16:21.139 13:26:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:21.139 13:26:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:21.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:21.139 13:26:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:21.139 13:26:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:21.139 13:26:37 -- common/autotest_common.sh@10 -- # set +x 00:16:21.139 13:26:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:21.139 13:26:38 -- common/autotest_common.sh@850 -- # return 0 00:16:21.139 13:26:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:21.139 Malloc0 00:16:21.139 13:26:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:16:21.398 Malloc1 00:16:21.398 13:26:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@12 -- # local i 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.398 13:26:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:16:21.658 /dev/nbd0 00:16:21.658 13:26:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.658 13:26:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.658 13:26:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:16:21.658 13:26:39 -- common/autotest_common.sh@855 -- # local i 00:16:21.658 13:26:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:21.658 13:26:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:21.658 13:26:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:16:21.658 13:26:39 -- common/autotest_common.sh@859 
-- # break 00:16:21.658 13:26:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:21.658 13:26:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:21.658 13:26:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:21.658 1+0 records in 00:16:21.658 1+0 records out 00:16:21.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254423 s, 16.1 MB/s 00:16:21.658 13:26:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:21.658 13:26:39 -- common/autotest_common.sh@872 -- # size=4096 00:16:21.658 13:26:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:21.658 13:26:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:21.658 13:26:39 -- common/autotest_common.sh@875 -- # return 0 00:16:21.658 13:26:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.658 13:26:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.658 13:26:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:16:21.917 /dev/nbd1 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:21.917 13:26:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:16:21.917 13:26:39 -- common/autotest_common.sh@855 -- # local i 00:16:21.917 13:26:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:16:21.917 13:26:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:16:21.917 13:26:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:16:21.917 13:26:39 -- common/autotest_common.sh@859 -- # break 00:16:21.917 13:26:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:16:21.917 13:26:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:16:21.917 13:26:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:16:21.917 1+0 records in 00:16:21.917 1+0 records out 00:16:21.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342514 s, 12.0 MB/s 00:16:21.917 13:26:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:21.917 13:26:39 -- common/autotest_common.sh@872 -- # size=4096 00:16:21.917 13:26:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:16:21.917 13:26:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:16:21.917 13:26:39 -- common/autotest_common.sh@875 -- # return 0 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.917 13:26:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:22.176 13:26:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:22.176 { 00:16:22.176 "bdev_name": "Malloc0", 00:16:22.176 "nbd_device": "/dev/nbd0" 00:16:22.176 }, 00:16:22.176 { 00:16:22.176 "bdev_name": "Malloc1", 00:16:22.176 "nbd_device": "/dev/nbd1" 00:16:22.176 } 00:16:22.176 ]' 00:16:22.176 13:26:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:22.176 13:26:39 -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:16:22.176 { 00:16:22.176 "bdev_name": "Malloc0", 00:16:22.176 "nbd_device": "/dev/nbd0" 00:16:22.176 }, 00:16:22.176 { 00:16:22.176 "bdev_name": "Malloc1", 00:16:22.176 "nbd_device": "/dev/nbd1" 00:16:22.176 } 00:16:22.176 ]' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:22.435 /dev/nbd1' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:22.435 /dev/nbd1' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@65 -- # count=2 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@95 -- # count=2 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:16:22.435 256+0 records in 00:16:22.435 256+0 records out 00:16:22.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100629 s, 104 MB/s 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:22.435 256+0 records in 00:16:22.435 256+0 records out 00:16:22.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233703 s, 44.9 MB/s 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:22.435 256+0 records in 00:16:22.435 256+0 records out 00:16:22.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291433 s, 36.0 MB/s 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:16:22.435 13:26:39 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@51 -- # local i 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.435 13:26:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@41 -- # break 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.694 13:26:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@41 -- # break 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.953 13:26:40 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@65 -- # true 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@65 -- # count=0 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@104 -- # count=0 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:23.211 13:26:40 -- bdev/nbd_common.sh@109 -- # return 0 00:16:23.211 13:26:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:16:23.470 13:26:40 -- event/event.sh@35 -- # sleep 3 00:16:23.729 [2024-04-26 13:26:41.090172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:23.987 [2024-04-26 13:26:41.207906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.987 [2024-04-26 13:26:41.207919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.987 [2024-04-26 13:26:41.263495] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:16:23.987 [2024-04-26 13:26:41.263597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:16:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:26.517 13:26:43 -- event/event.sh@38 -- # waitforlisten 61867 /var/tmp/spdk-nbd.sock 00:16:26.517 13:26:43 -- common/autotest_common.sh@817 -- # '[' -z 61867 ']' 00:16:26.517 13:26:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:26.517 13:26:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.517 13:26:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:26.517 13:26:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.517 13:26:43 -- common/autotest_common.sh@10 -- # set +x 00:16:26.776 13:26:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.776 13:26:44 -- common/autotest_common.sh@850 -- # return 0 00:16:26.776 13:26:44 -- event/event.sh@39 -- # killprocess 61867 00:16:26.776 13:26:44 -- common/autotest_common.sh@936 -- # '[' -z 61867 ']' 00:16:26.776 13:26:44 -- common/autotest_common.sh@940 -- # kill -0 61867 00:16:26.776 13:26:44 -- common/autotest_common.sh@941 -- # uname 00:16:26.776 13:26:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:26.776 13:26:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61867 00:16:26.776 killing process with pid 61867 00:16:26.776 13:26:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:26.776 13:26:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:26.776 13:26:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61867' 00:16:26.776 13:26:44 -- common/autotest_common.sh@955 -- # kill 61867 00:16:26.776 13:26:44 -- common/autotest_common.sh@960 -- # wait 61867 00:16:27.034 spdk_app_start is called in Round 0. 00:16:27.034 Shutdown signal received, stop current app iteration 00:16:27.034 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:16:27.034 spdk_app_start is called in Round 1. 00:16:27.034 Shutdown signal received, stop current app iteration 00:16:27.034 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:16:27.034 spdk_app_start is called in Round 2. 00:16:27.034 Shutdown signal received, stop current app iteration 00:16:27.034 Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 reinitialization... 00:16:27.034 spdk_app_start is called in Round 3. 
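The nbd_common.sh trace above amounts to a start / write / verify / stop cycle against two malloc bdevs exported as /dev/nbd0 and /dev/nbd1. A rough by-hand sketch of the same flow, assuming a target is already listening on /var/tmp/spdk-nbd.sock and exposes a bdev named Malloc0 (the temp-file path is illustrative, not the one the test uses):

    # attach the bdev to a kernel nbd node, as nbd_start_disk does above
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0

    # write a random 1 MiB pattern through the nbd device with O_DIRECT and compare it
    # back against the source file, mirroring the dd/cmp steps in nbd_dd_data_verify
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # list what is attached, then detach
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

nbd_get_disks returns the same JSON array of {bdev_name, nbd_device} pairs that the test pipes through jq above.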
00:16:27.034 Shutdown signal received, stop current app iteration 00:16:27.034 ************************************ 00:16:27.034 END TEST app_repeat 00:16:27.034 ************************************ 00:16:27.034 13:26:44 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:16:27.034 13:26:44 -- event/event.sh@42 -- # return 0 00:16:27.034 00:16:27.034 real 0m19.477s 00:16:27.034 user 0m43.435s 00:16:27.034 sys 0m3.302s 00:16:27.034 13:26:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:27.035 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:27.035 13:26:44 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:16:27.035 13:26:44 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:27.035 13:26:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.035 13:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.035 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:27.293 ************************************ 00:16:27.293 START TEST cpu_locks 00:16:27.293 ************************************ 00:16:27.293 13:26:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:16:27.293 * Looking for test storage... 00:16:27.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:16:27.293 13:26:44 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:16:27.293 13:26:44 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:16:27.294 13:26:44 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:16:27.294 13:26:44 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:16:27.294 13:26:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.294 13:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.294 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:27.294 ************************************ 00:16:27.294 START TEST default_locks 00:16:27.294 ************************************ 00:16:27.294 13:26:44 -- common/autotest_common.sh@1111 -- # default_locks 00:16:27.294 13:26:44 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62507 00:16:27.294 13:26:44 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:27.294 13:26:44 -- event/cpu_locks.sh@47 -- # waitforlisten 62507 00:16:27.294 13:26:44 -- common/autotest_common.sh@817 -- # '[' -z 62507 ']' 00:16:27.294 13:26:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.294 13:26:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:27.294 13:26:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.294 13:26:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:27.294 13:26:44 -- common/autotest_common.sh@10 -- # set +x 00:16:27.552 [2024-04-26 13:26:44.743530] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:27.552 [2024-04-26 13:26:44.743651] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62507 ] 00:16:27.552 [2024-04-26 13:26:44.881857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.822 [2024-04-26 13:26:45.005979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.394 13:26:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:28.394 13:26:45 -- common/autotest_common.sh@850 -- # return 0 00:16:28.394 13:26:45 -- event/cpu_locks.sh@49 -- # locks_exist 62507 00:16:28.394 13:26:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:28.394 13:26:45 -- event/cpu_locks.sh@22 -- # lslocks -p 62507 00:16:28.960 13:26:46 -- event/cpu_locks.sh@50 -- # killprocess 62507 00:16:28.961 13:26:46 -- common/autotest_common.sh@936 -- # '[' -z 62507 ']' 00:16:28.961 13:26:46 -- common/autotest_common.sh@940 -- # kill -0 62507 00:16:28.961 13:26:46 -- common/autotest_common.sh@941 -- # uname 00:16:28.961 13:26:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:28.961 13:26:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62507 00:16:28.961 killing process with pid 62507 00:16:28.961 13:26:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:28.961 13:26:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:28.961 13:26:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62507' 00:16:28.961 13:26:46 -- common/autotest_common.sh@955 -- # kill 62507 00:16:28.961 13:26:46 -- common/autotest_common.sh@960 -- # wait 62507 00:16:29.220 13:26:46 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62507 00:16:29.220 13:26:46 -- common/autotest_common.sh@638 -- # local es=0 00:16:29.220 13:26:46 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62507 00:16:29.220 13:26:46 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:29.220 13:26:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:29.220 13:26:46 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:29.220 13:26:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:29.220 13:26:46 -- common/autotest_common.sh@641 -- # waitforlisten 62507 00:16:29.220 13:26:46 -- common/autotest_common.sh@817 -- # '[' -z 62507 ']' 00:16:29.220 13:26:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.220 13:26:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:29.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.220 13:26:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
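The locks_exist check traced above (lslocks -p <pid> piped through grep for spdk_cpu_lock) is the core of the default_locks case: a target started with -m 0x1 must hold a lock on its core-0 lock file for as long as it runs. A rough sketch of the same check done by hand, assuming the /var/tmp lock-file naming that check_remaining_locks expects later in this run:

    # start a target pinned to core 0; it should take a lock on /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 &
    pid=$!

    # the same check the test performs: does the process hold an spdk_cpu_lock file?
    lslocks -p "$pid" | grep spdk_cpu_lock

    # the lock files themselves are plain files under /var/tmp
    ls /var/tmp/spdk_cpu_lock_*

Once the target is killed, repeating waitforlisten on the dead pid has to fail, which is what the ERROR / "No such process" block just below demonstrates.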
00:16:29.220 13:26:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:29.220 13:26:46 -- common/autotest_common.sh@10 -- # set +x 00:16:29.220 ERROR: process (pid: 62507) is no longer running 00:16:29.220 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62507) - No such process 00:16:29.220 13:26:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:29.220 13:26:46 -- common/autotest_common.sh@850 -- # return 1 00:16:29.220 13:26:46 -- common/autotest_common.sh@641 -- # es=1 00:16:29.220 13:26:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:29.220 13:26:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:29.220 13:26:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:29.220 13:26:46 -- event/cpu_locks.sh@54 -- # no_locks 00:16:29.220 13:26:46 -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:29.220 13:26:46 -- event/cpu_locks.sh@26 -- # local lock_files 00:16:29.220 13:26:46 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:29.220 00:16:29.220 real 0m1.959s 00:16:29.220 user 0m2.086s 00:16:29.220 sys 0m0.588s 00:16:29.220 13:26:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:29.220 ************************************ 00:16:29.220 END TEST default_locks 00:16:29.220 ************************************ 00:16:29.220 13:26:46 -- common/autotest_common.sh@10 -- # set +x 00:16:29.220 13:26:46 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:16:29.220 13:26:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:29.220 13:26:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:29.220 13:26:46 -- common/autotest_common.sh@10 -- # set +x 00:16:29.479 ************************************ 00:16:29.479 START TEST default_locks_via_rpc 00:16:29.479 ************************************ 00:16:29.479 13:26:46 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:16:29.479 13:26:46 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62576 00:16:29.479 13:26:46 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:29.479 13:26:46 -- event/cpu_locks.sh@63 -- # waitforlisten 62576 00:16:29.479 13:26:46 -- common/autotest_common.sh@817 -- # '[' -z 62576 ']' 00:16:29.479 13:26:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.479 13:26:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:29.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.479 13:26:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.479 13:26:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:29.479 13:26:46 -- common/autotest_common.sh@10 -- # set +x 00:16:29.479 [2024-04-26 13:26:46.803731] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:29.479 [2024-04-26 13:26:46.803874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62576 ] 00:16:29.738 [2024-04-26 13:26:46.938970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.738 [2024-04-26 13:26:47.060078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.673 13:26:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.673 13:26:47 -- common/autotest_common.sh@850 -- # return 0 00:16:30.673 13:26:47 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:16:30.673 13:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.673 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.673 13:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.673 13:26:47 -- event/cpu_locks.sh@67 -- # no_locks 00:16:30.673 13:26:47 -- event/cpu_locks.sh@26 -- # lock_files=() 00:16:30.673 13:26:47 -- event/cpu_locks.sh@26 -- # local lock_files 00:16:30.673 13:26:47 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:16:30.673 13:26:47 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:16:30.673 13:26:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.673 13:26:47 -- common/autotest_common.sh@10 -- # set +x 00:16:30.673 13:26:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.673 13:26:47 -- event/cpu_locks.sh@71 -- # locks_exist 62576 00:16:30.673 13:26:47 -- event/cpu_locks.sh@22 -- # lslocks -p 62576 00:16:30.673 13:26:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:30.933 13:26:48 -- event/cpu_locks.sh@73 -- # killprocess 62576 00:16:30.933 13:26:48 -- common/autotest_common.sh@936 -- # '[' -z 62576 ']' 00:16:30.933 13:26:48 -- common/autotest_common.sh@940 -- # kill -0 62576 00:16:30.933 13:26:48 -- common/autotest_common.sh@941 -- # uname 00:16:30.933 13:26:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:30.933 13:26:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62576 00:16:30.933 13:26:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:30.933 killing process with pid 62576 00:16:30.933 13:26:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:30.933 13:26:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62576' 00:16:30.933 13:26:48 -- common/autotest_common.sh@955 -- # kill 62576 00:16:30.933 13:26:48 -- common/autotest_common.sh@960 -- # wait 62576 00:16:31.499 00:16:31.499 real 0m1.908s 00:16:31.499 user 0m2.020s 00:16:31.499 sys 0m0.592s 00:16:31.499 13:26:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.499 13:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 ************************************ 00:16:31.499 END TEST default_locks_via_rpc 00:16:31.499 ************************************ 00:16:31.499 13:26:48 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:16:31.499 13:26:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:31.499 13:26:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.499 13:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 ************************************ 00:16:31.499 START TEST non_locking_app_on_locked_coremask 00:16:31.499 ************************************ 00:16:31.499 13:26:48 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:16:31.499 13:26:48 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62649 00:16:31.499 13:26:48 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:31.499 13:26:48 -- event/cpu_locks.sh@81 -- # waitforlisten 62649 /var/tmp/spdk.sock 00:16:31.499 13:26:48 -- common/autotest_common.sh@817 -- # '[' -z 62649 ']' 00:16:31.499 13:26:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.499 13:26:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:31.499 13:26:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.499 13:26:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:31.499 13:26:48 -- common/autotest_common.sh@10 -- # set +x 00:16:31.499 [2024-04-26 13:26:48.833257] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:31.499 [2024-04-26 13:26:48.833394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62649 ] 00:16:31.833 [2024-04-26 13:26:48.972145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.833 [2024-04-26 13:26:49.085623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.766 13:26:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:32.766 13:26:49 -- common/autotest_common.sh@850 -- # return 0 00:16:32.767 13:26:49 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:16:32.767 13:26:49 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62677 00:16:32.767 13:26:49 -- event/cpu_locks.sh@85 -- # waitforlisten 62677 /var/tmp/spdk2.sock 00:16:32.767 13:26:49 -- common/autotest_common.sh@817 -- # '[' -z 62677 ']' 00:16:32.767 13:26:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:32.767 13:26:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:32.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:32.767 13:26:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:32.767 13:26:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:32.767 13:26:49 -- common/autotest_common.sh@10 -- # set +x 00:16:32.767 [2024-04-26 13:26:49.905948] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:32.767 [2024-04-26 13:26:49.906048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62677 ] 00:16:32.767 [2024-04-26 13:26:50.049310] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
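The point of non_locking_app_on_locked_coremask is visible in the two launches above: the first target locks core 0, and the second comes up on the same core only because it passes --disable-cpumask-locks and talks on its own RPC socket. Reduced to the two commands, with sockets as in the trace (the follow-up RPC is only an illustrative way to reach each instance and is not part of this test):

    # first instance: claims core 0 and its lock file, default socket /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x1 &

    # second instance: same core mask, but skips core locking and uses a separate socket
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # each instance answers on its own socket
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null
    scripts/rpc.py -s /var/tmp/spdk2.sock rpc_get_methods >/dev/null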
00:16:32.767 [2024-04-26 13:26:50.049371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.025 [2024-04-26 13:26:50.299843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.591 13:26:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:33.591 13:26:50 -- common/autotest_common.sh@850 -- # return 0 00:16:33.591 13:26:50 -- event/cpu_locks.sh@87 -- # locks_exist 62649 00:16:33.591 13:26:50 -- event/cpu_locks.sh@22 -- # lslocks -p 62649 00:16:33.591 13:26:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:34.555 13:26:51 -- event/cpu_locks.sh@89 -- # killprocess 62649 00:16:34.555 13:26:51 -- common/autotest_common.sh@936 -- # '[' -z 62649 ']' 00:16:34.555 13:26:51 -- common/autotest_common.sh@940 -- # kill -0 62649 00:16:34.555 13:26:51 -- common/autotest_common.sh@941 -- # uname 00:16:34.555 13:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.555 13:26:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62649 00:16:34.555 killing process with pid 62649 00:16:34.555 13:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.555 13:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.555 13:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62649' 00:16:34.555 13:26:51 -- common/autotest_common.sh@955 -- # kill 62649 00:16:34.555 13:26:51 -- common/autotest_common.sh@960 -- # wait 62649 00:16:35.489 13:26:52 -- event/cpu_locks.sh@90 -- # killprocess 62677 00:16:35.489 13:26:52 -- common/autotest_common.sh@936 -- # '[' -z 62677 ']' 00:16:35.489 13:26:52 -- common/autotest_common.sh@940 -- # kill -0 62677 00:16:35.489 13:26:52 -- common/autotest_common.sh@941 -- # uname 00:16:35.489 13:26:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.489 13:26:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62677 00:16:35.489 13:26:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.489 killing process with pid 62677 00:16:35.489 13:26:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.489 13:26:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62677' 00:16:35.489 13:26:52 -- common/autotest_common.sh@955 -- # kill 62677 00:16:35.489 13:26:52 -- common/autotest_common.sh@960 -- # wait 62677 00:16:35.747 00:16:35.747 real 0m4.387s 00:16:35.747 user 0m4.898s 00:16:35.747 sys 0m1.182s 00:16:35.747 13:26:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.747 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:35.747 ************************************ 00:16:35.747 END TEST non_locking_app_on_locked_coremask 00:16:35.747 ************************************ 00:16:36.032 13:26:53 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:16:36.032 13:26:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:36.032 13:26:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.032 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:36.032 ************************************ 00:16:36.032 START TEST locking_app_on_unlocked_coremask 00:16:36.032 ************************************ 00:16:36.032 13:26:53 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:16:36.032 13:26:53 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=62767 00:16:36.032 13:26:53 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:16:36.032 13:26:53 -- event/cpu_locks.sh@99 -- # waitforlisten 62767 /var/tmp/spdk.sock 00:16:36.032 13:26:53 -- common/autotest_common.sh@817 -- # '[' -z 62767 ']' 00:16:36.032 13:26:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.032 13:26:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:36.032 13:26:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.032 13:26:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:36.032 13:26:53 -- common/autotest_common.sh@10 -- # set +x 00:16:36.032 [2024-04-26 13:26:53.339860] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:36.032 [2024-04-26 13:26:53.339955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62767 ] 00:16:36.032 [2024-04-26 13:26:53.472529] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:36.032 [2024-04-26 13:26:53.472603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.291 [2024-04-26 13:26:53.591805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.243 13:26:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:37.243 13:26:54 -- common/autotest_common.sh@850 -- # return 0 00:16:37.243 13:26:54 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:37.243 13:26:54 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62800 00:16:37.243 13:26:54 -- event/cpu_locks.sh@103 -- # waitforlisten 62800 /var/tmp/spdk2.sock 00:16:37.243 13:26:54 -- common/autotest_common.sh@817 -- # '[' -z 62800 ']' 00:16:37.243 13:26:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:37.243 13:26:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:37.243 13:26:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:37.243 13:26:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.243 13:26:54 -- common/autotest_common.sh@10 -- # set +x 00:16:37.243 [2024-04-26 13:26:54.445063] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:37.243 [2024-04-26 13:26:54.445151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62800 ] 00:16:37.243 [2024-04-26 13:26:54.584461] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.501 [2024-04-26 13:26:54.813728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.069 13:26:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.069 13:26:55 -- common/autotest_common.sh@850 -- # return 0 00:16:38.069 13:26:55 -- event/cpu_locks.sh@105 -- # locks_exist 62800 00:16:38.069 13:26:55 -- event/cpu_locks.sh@22 -- # lslocks -p 62800 00:16:38.069 13:26:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:39.007 13:26:56 -- event/cpu_locks.sh@107 -- # killprocess 62767 00:16:39.007 13:26:56 -- common/autotest_common.sh@936 -- # '[' -z 62767 ']' 00:16:39.007 13:26:56 -- common/autotest_common.sh@940 -- # kill -0 62767 00:16:39.007 13:26:56 -- common/autotest_common.sh@941 -- # uname 00:16:39.007 13:26:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:39.007 13:26:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62767 00:16:39.007 13:26:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:39.007 13:26:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:39.008 13:26:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62767' 00:16:39.008 killing process with pid 62767 00:16:39.008 13:26:56 -- common/autotest_common.sh@955 -- # kill 62767 00:16:39.008 13:26:56 -- common/autotest_common.sh@960 -- # wait 62767 00:16:40.011 13:26:57 -- event/cpu_locks.sh@108 -- # killprocess 62800 00:16:40.011 13:26:57 -- common/autotest_common.sh@936 -- # '[' -z 62800 ']' 00:16:40.011 13:26:57 -- common/autotest_common.sh@940 -- # kill -0 62800 00:16:40.011 13:26:57 -- common/autotest_common.sh@941 -- # uname 00:16:40.011 13:26:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:40.011 13:26:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62800 00:16:40.011 13:26:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:40.011 13:26:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:40.011 13:26:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62800' 00:16:40.011 killing process with pid 62800 00:16:40.011 13:26:57 -- common/autotest_common.sh@955 -- # kill 62800 00:16:40.011 13:26:57 -- common/autotest_common.sh@960 -- # wait 62800 00:16:40.272 00:16:40.272 real 0m4.337s 00:16:40.272 user 0m4.850s 00:16:40.272 sys 0m1.182s 00:16:40.272 13:26:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.272 ************************************ 00:16:40.272 END TEST locking_app_on_unlocked_coremask 00:16:40.272 ************************************ 00:16:40.272 13:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.272 13:26:57 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:16:40.272 13:26:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:40.272 13:26:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.272 13:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 ************************************ 00:16:40.532 START TEST locking_app_on_locked_coremask 00:16:40.532 
************************************ 00:16:40.532 13:26:57 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:16:40.532 13:26:57 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62883 00:16:40.532 13:26:57 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:40.532 13:26:57 -- event/cpu_locks.sh@116 -- # waitforlisten 62883 /var/tmp/spdk.sock 00:16:40.532 13:26:57 -- common/autotest_common.sh@817 -- # '[' -z 62883 ']' 00:16:40.532 13:26:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.532 13:26:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.532 13:26:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.532 13:26:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.532 13:26:57 -- common/autotest_common.sh@10 -- # set +x 00:16:40.532 [2024-04-26 13:26:57.808470] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:40.532 [2024-04-26 13:26:57.808594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62883 ] 00:16:40.532 [2024-04-26 13:26:57.943245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.791 [2024-04-26 13:26:58.075554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.728 13:26:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.728 13:26:58 -- common/autotest_common.sh@850 -- # return 0 00:16:41.728 13:26:58 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62911 00:16:41.728 13:26:58 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:41.728 13:26:58 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62911 /var/tmp/spdk2.sock 00:16:41.728 13:26:58 -- common/autotest_common.sh@638 -- # local es=0 00:16:41.728 13:26:58 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62911 /var/tmp/spdk2.sock 00:16:41.728 13:26:58 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:41.728 13:26:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.728 13:26:58 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:41.728 13:26:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:41.728 13:26:58 -- common/autotest_common.sh@641 -- # waitforlisten 62911 /var/tmp/spdk2.sock 00:16:41.728 13:26:58 -- common/autotest_common.sh@817 -- # '[' -z 62911 ']' 00:16:41.728 13:26:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:41.728 13:26:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:41.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:41.728 13:26:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:41.728 13:26:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:41.728 13:26:58 -- common/autotest_common.sh@10 -- # set +x 00:16:41.728 [2024-04-26 13:26:58.912255] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:41.728 [2024-04-26 13:26:58.912385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62911 ] 00:16:41.728 [2024-04-26 13:26:59.055767] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62883 has claimed it. 00:16:41.728 [2024-04-26 13:26:59.055892] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:42.295 ERROR: process (pid: 62911) is no longer running 00:16:42.295 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62911) - No such process 00:16:42.295 13:26:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:42.295 13:26:59 -- common/autotest_common.sh@850 -- # return 1 00:16:42.295 13:26:59 -- common/autotest_common.sh@641 -- # es=1 00:16:42.295 13:26:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:42.295 13:26:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:42.295 13:26:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:42.295 13:26:59 -- event/cpu_locks.sh@122 -- # locks_exist 62883 00:16:42.295 13:26:59 -- event/cpu_locks.sh@22 -- # lslocks -p 62883 00:16:42.295 13:26:59 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:42.862 13:27:00 -- event/cpu_locks.sh@124 -- # killprocess 62883 00:16:42.862 13:27:00 -- common/autotest_common.sh@936 -- # '[' -z 62883 ']' 00:16:42.862 13:27:00 -- common/autotest_common.sh@940 -- # kill -0 62883 00:16:42.862 13:27:00 -- common/autotest_common.sh@941 -- # uname 00:16:42.862 13:27:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.862 13:27:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62883 00:16:42.862 13:27:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.862 13:27:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.862 killing process with pid 62883 00:16:42.862 13:27:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62883' 00:16:42.862 13:27:00 -- common/autotest_common.sh@955 -- # kill 62883 00:16:42.862 13:27:00 -- common/autotest_common.sh@960 -- # wait 62883 00:16:43.119 00:16:43.119 real 0m2.781s 00:16:43.119 user 0m3.252s 00:16:43.119 sys 0m0.727s 00:16:43.119 13:27:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.119 13:27:00 -- common/autotest_common.sh@10 -- # set +x 00:16:43.119 ************************************ 00:16:43.119 END TEST locking_app_on_locked_coremask 00:16:43.119 ************************************ 00:16:43.119 13:27:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:43.119 13:27:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:43.119 13:27:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.119 13:27:00 -- common/autotest_common.sh@10 -- # set +x 00:16:43.377 ************************************ 00:16:43.377 START TEST locking_overlapped_coremask 00:16:43.377 ************************************ 00:16:43.377 13:27:00 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:16:43.377 13:27:00 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62967 00:16:43.377 13:27:00 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:43.377 13:27:00 -- event/cpu_locks.sh@133 -- # waitforlisten 62967 /var/tmp/spdk.sock 00:16:43.377 
13:27:00 -- common/autotest_common.sh@817 -- # '[' -z 62967 ']' 00:16:43.377 13:27:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.377 13:27:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:43.377 13:27:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.377 13:27:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:43.377 13:27:00 -- common/autotest_common.sh@10 -- # set +x 00:16:43.377 [2024-04-26 13:27:00.711938] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:43.377 [2024-04-26 13:27:00.712247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62967 ] 00:16:43.635 [2024-04-26 13:27:00.851249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.635 [2024-04-26 13:27:00.967362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.635 [2024-04-26 13:27:00.967525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.635 [2024-04-26 13:27:00.967529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.565 13:27:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:44.565 13:27:01 -- common/autotest_common.sh@850 -- # return 0 00:16:44.565 13:27:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62997 00:16:44.565 13:27:01 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:44.565 13:27:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62997 /var/tmp/spdk2.sock 00:16:44.565 13:27:01 -- common/autotest_common.sh@638 -- # local es=0 00:16:44.565 13:27:01 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62997 /var/tmp/spdk2.sock 00:16:44.565 13:27:01 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:16:44.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:44.565 13:27:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.565 13:27:01 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:16:44.565 13:27:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:44.565 13:27:01 -- common/autotest_common.sh@641 -- # waitforlisten 62997 /var/tmp/spdk2.sock 00:16:44.565 13:27:01 -- common/autotest_common.sh@817 -- # '[' -z 62997 ']' 00:16:44.565 13:27:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:44.565 13:27:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.565 13:27:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:44.565 13:27:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.565 13:27:01 -- common/autotest_common.sh@10 -- # set +x 00:16:44.565 [2024-04-26 13:27:01.757755] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:44.566 [2024-04-26 13:27:01.759069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62997 ] 00:16:44.566 [2024-04-26 13:27:01.909859] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62967 has claimed it. 00:16:44.566 [2024-04-26 13:27:01.909947] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:45.131 ERROR: process (pid: 62997) is no longer running 00:16:45.131 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62997) - No such process 00:16:45.131 13:27:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.131 13:27:02 -- common/autotest_common.sh@850 -- # return 1 00:16:45.131 13:27:02 -- common/autotest_common.sh@641 -- # es=1 00:16:45.131 13:27:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:45.131 13:27:02 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:45.131 13:27:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:45.131 13:27:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:45.131 13:27:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:45.131 13:27:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:45.131 13:27:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:45.131 13:27:02 -- event/cpu_locks.sh@141 -- # killprocess 62967 00:16:45.131 13:27:02 -- common/autotest_common.sh@936 -- # '[' -z 62967 ']' 00:16:45.131 13:27:02 -- common/autotest_common.sh@940 -- # kill -0 62967 00:16:45.131 13:27:02 -- common/autotest_common.sh@941 -- # uname 00:16:45.131 13:27:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:45.131 13:27:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62967 00:16:45.131 killing process with pid 62967 00:16:45.131 13:27:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:45.131 13:27:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:45.131 13:27:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62967' 00:16:45.131 13:27:02 -- common/autotest_common.sh@955 -- # kill 62967 00:16:45.131 13:27:02 -- common/autotest_common.sh@960 -- # wait 62967 00:16:45.697 00:16:45.697 real 0m2.280s 00:16:45.697 user 0m6.236s 00:16:45.697 sys 0m0.490s 00:16:45.697 ************************************ 00:16:45.697 END TEST locking_overlapped_coremask 00:16:45.697 ************************************ 00:16:45.697 13:27:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:45.697 13:27:02 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 13:27:02 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:45.697 13:27:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:45.697 13:27:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:45.697 13:27:02 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 ************************************ 00:16:45.697 START TEST locking_overlapped_coremask_via_rpc 00:16:45.697 ************************************ 
00:16:45.697 13:27:03 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:16:45.697 13:27:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63052 00:16:45.697 13:27:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:45.697 13:27:03 -- event/cpu_locks.sh@149 -- # waitforlisten 63052 /var/tmp/spdk.sock 00:16:45.697 13:27:03 -- common/autotest_common.sh@817 -- # '[' -z 63052 ']' 00:16:45.697 13:27:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.697 13:27:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:45.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.697 13:27:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.697 13:27:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:45.697 13:27:03 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 [2024-04-26 13:27:03.101698] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:45.697 [2024-04-26 13:27:03.101833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63052 ] 00:16:45.966 [2024-04-26 13:27:03.230992] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:45.966 [2024-04-26 13:27:03.231041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.966 [2024-04-26 13:27:03.341718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.966 [2024-04-26 13:27:03.341872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.966 [2024-04-26 13:27:03.341876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.936 13:27:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:46.936 13:27:04 -- common/autotest_common.sh@850 -- # return 0 00:16:46.936 13:27:04 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63082 00:16:46.936 13:27:04 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:46.936 13:27:04 -- event/cpu_locks.sh@153 -- # waitforlisten 63082 /var/tmp/spdk2.sock 00:16:46.936 13:27:04 -- common/autotest_common.sh@817 -- # '[' -z 63082 ']' 00:16:46.936 13:27:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:46.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:46.936 13:27:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:46.936 13:27:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:46.936 13:27:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:46.936 13:27:04 -- common/autotest_common.sh@10 -- # set +x 00:16:46.936 [2024-04-26 13:27:04.110079] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:46.936 [2024-04-26 13:27:04.110187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:16:46.936 [2024-04-26 13:27:04.257233] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:16:46.936 [2024-04-26 13:27:04.257322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:47.194 [2024-04-26 13:27:04.500354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:47.194 [2024-04-26 13:27:04.500474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:47.194 [2024-04-26 13:27:04.500474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:47.759 13:27:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.759 13:27:05 -- common/autotest_common.sh@850 -- # return 0 00:16:47.759 13:27:05 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:47.759 13:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.759 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.759 13:27:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.759 13:27:05 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:47.759 13:27:05 -- common/autotest_common.sh@638 -- # local es=0 00:16:47.759 13:27:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:47.759 13:27:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:47.759 13:27:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.759 13:27:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:47.759 13:27:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:47.759 13:27:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:47.759 13:27:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.759 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:47.759 [2024-04-26 13:27:05.151954] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63052 has claimed it. 
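locking_overlapped_coremask_via_rpc starts both targets with locking disabled so the overlapping masks can coexist (0x7 covers cores 0-2, 0x1c covers cores 2-4, sharing core 2), then claims the locks at runtime. A condensed sketch of the sequence the trace follows; the first target wins core 2, so the second enable call fails with the JSON-RPC error reproduced just below:

    # overlapping core masks, both unlocked at startup
    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &

    # first target claims cores 0-2 after the fact (succeeds)
    scripts/rpc.py framework_enable_cpumask_locks

    # second target tries to claim cores 2-4; core 2 is already locked, so this returns
    # the Code=-32603 "Failed to claim CPU core: 2" error shown below
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks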
00:16:47.759 2024/04/26 13:27:05 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:16:47.759 request: 00:16:47.759 { 00:16:47.759 "method": "framework_enable_cpumask_locks", 00:16:47.759 "params": {} 00:16:47.759 } 00:16:47.759 Got JSON-RPC error response 00:16:47.759 GoRPCClient: error on JSON-RPC call 00:16:47.759 13:27:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:47.759 13:27:05 -- common/autotest_common.sh@641 -- # es=1 00:16:47.759 13:27:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:47.759 13:27:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:47.759 13:27:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:47.759 13:27:05 -- event/cpu_locks.sh@158 -- # waitforlisten 63052 /var/tmp/spdk.sock 00:16:47.759 13:27:05 -- common/autotest_common.sh@817 -- # '[' -z 63052 ']' 00:16:47.759 13:27:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.759 13:27:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.759 13:27:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.759 13:27:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.759 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:48.018 13:27:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.018 13:27:05 -- common/autotest_common.sh@850 -- # return 0 00:16:48.018 13:27:05 -- event/cpu_locks.sh@159 -- # waitforlisten 63082 /var/tmp/spdk2.sock 00:16:48.018 13:27:05 -- common/autotest_common.sh@817 -- # '[' -z 63082 ']' 00:16:48.018 13:27:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:48.018 13:27:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:48.018 13:27:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
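The teardown that follows goes through the killprocess helper whose xtrace appears throughout this run (kill -0, uname, ps --no-headers -o comm=, kill, wait). An approximate reconstruction from that trace only, not the exact autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        if kill -0 "$pid"; then
            # on Linux, look up the command name; the real helper special-cases a sudo
            # wrapper, while plain reactors (reactor_0, reactor_2, ...) get a direct kill
            [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }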
00:16:48.018 13:27:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.018 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:48.276 13:27:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.276 13:27:05 -- common/autotest_common.sh@850 -- # return 0 00:16:48.276 13:27:05 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:48.276 13:27:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:48.276 13:27:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:48.277 13:27:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:48.277 00:16:48.277 real 0m2.630s 00:16:48.277 user 0m1.330s 00:16:48.277 sys 0m0.242s 00:16:48.277 13:27:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:48.277 13:27:05 -- common/autotest_common.sh@10 -- # set +x 00:16:48.277 ************************************ 00:16:48.277 END TEST locking_overlapped_coremask_via_rpc 00:16:48.277 ************************************ 00:16:48.277 13:27:05 -- event/cpu_locks.sh@174 -- # cleanup 00:16:48.277 13:27:05 -- event/cpu_locks.sh@15 -- # [[ -z 63052 ]] 00:16:48.277 13:27:05 -- event/cpu_locks.sh@15 -- # killprocess 63052 00:16:48.277 13:27:05 -- common/autotest_common.sh@936 -- # '[' -z 63052 ']' 00:16:48.277 13:27:05 -- common/autotest_common.sh@940 -- # kill -0 63052 00:16:48.277 13:27:05 -- common/autotest_common.sh@941 -- # uname 00:16:48.277 13:27:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.277 13:27:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63052 00:16:48.535 13:27:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:48.535 13:27:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:48.535 13:27:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63052' 00:16:48.535 killing process with pid 63052 00:16:48.535 13:27:05 -- common/autotest_common.sh@955 -- # kill 63052 00:16:48.535 13:27:05 -- common/autotest_common.sh@960 -- # wait 63052 00:16:48.794 13:27:06 -- event/cpu_locks.sh@16 -- # [[ -z 63082 ]] 00:16:48.794 13:27:06 -- event/cpu_locks.sh@16 -- # killprocess 63082 00:16:48.794 13:27:06 -- common/autotest_common.sh@936 -- # '[' -z 63082 ']' 00:16:48.794 13:27:06 -- common/autotest_common.sh@940 -- # kill -0 63082 00:16:48.794 13:27:06 -- common/autotest_common.sh@941 -- # uname 00:16:48.794 13:27:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.794 13:27:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63082 00:16:48.794 13:27:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:16:48.794 13:27:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:16:48.794 13:27:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63082' 00:16:48.794 killing process with pid 63082 00:16:48.794 13:27:06 -- common/autotest_common.sh@955 -- # kill 63082 00:16:48.794 13:27:06 -- common/autotest_common.sh@960 -- # wait 63082 00:16:49.362 13:27:06 -- event/cpu_locks.sh@18 -- # rm -f 00:16:49.362 13:27:06 -- event/cpu_locks.sh@1 -- # cleanup 00:16:49.362 13:27:06 -- event/cpu_locks.sh@15 -- # [[ -z 63052 ]] 00:16:49.362 13:27:06 -- event/cpu_locks.sh@15 -- # killprocess 63052 00:16:49.362 13:27:06 -- 
common/autotest_common.sh@936 -- # '[' -z 63052 ']' 00:16:49.362 13:27:06 -- common/autotest_common.sh@940 -- # kill -0 63052 00:16:49.362 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63052) - No such process 00:16:49.362 Process with pid 63052 is not found 00:16:49.362 13:27:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63052 is not found' 00:16:49.362 13:27:06 -- event/cpu_locks.sh@16 -- # [[ -z 63082 ]] 00:16:49.362 13:27:06 -- event/cpu_locks.sh@16 -- # killprocess 63082 00:16:49.362 13:27:06 -- common/autotest_common.sh@936 -- # '[' -z 63082 ']' 00:16:49.362 13:27:06 -- common/autotest_common.sh@940 -- # kill -0 63082 00:16:49.362 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63082) - No such process 00:16:49.362 Process with pid 63082 is not found 00:16:49.362 13:27:06 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63082 is not found' 00:16:49.362 13:27:06 -- event/cpu_locks.sh@18 -- # rm -f 00:16:49.362 00:16:49.362 real 0m22.126s 00:16:49.362 user 0m37.411s 00:16:49.362 sys 0m6.072s 00:16:49.362 13:27:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.362 ************************************ 00:16:49.362 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:16:49.362 END TEST cpu_locks 00:16:49.362 ************************************ 00:16:49.362 00:16:49.362 real 0m51.267s 00:16:49.362 user 1m36.571s 00:16:49.362 sys 0m10.389s 00:16:49.362 13:27:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:49.362 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:16:49.362 ************************************ 00:16:49.362 END TEST event 00:16:49.362 ************************************ 00:16:49.362 13:27:06 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:49.362 13:27:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:49.362 13:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.362 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:16:49.362 ************************************ 00:16:49.362 START TEST thread 00:16:49.362 ************************************ 00:16:49.362 13:27:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:49.621 * Looking for test storage... 00:16:49.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:49.621 13:27:06 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:49.621 13:27:06 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:49.621 13:27:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:49.621 13:27:06 -- common/autotest_common.sh@10 -- # set +x 00:16:49.621 ************************************ 00:16:49.621 START TEST thread_poller_perf 00:16:49.621 ************************************ 00:16:49.621 13:27:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:49.621 [2024-04-26 13:27:06.969978] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:49.621 [2024-04-26 13:27:06.970076] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63238 ] 00:16:49.881 [2024-04-26 13:27:07.107657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.881 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:16:49.881 [2024-04-26 13:27:07.235895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.310 ====================================== 00:16:51.310 busy:2213222436 (cyc) 00:16:51.310 total_run_count: 304000 00:16:51.310 tsc_hz: 2200000000 (cyc) 00:16:51.310 ====================================== 00:16:51.310 poller_cost: 7280 (cyc), 3309 (nsec) 00:16:51.310 00:16:51.310 real 0m1.407s 00:16:51.310 user 0m1.242s 00:16:51.310 sys 0m0.057s 00:16:51.310 13:27:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:51.310 ************************************ 00:16:51.310 END TEST thread_poller_perf 00:16:51.310 ************************************ 00:16:51.310 13:27:08 -- common/autotest_common.sh@10 -- # set +x 00:16:51.310 13:27:08 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:51.310 13:27:08 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:16:51.310 13:27:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:51.310 13:27:08 -- common/autotest_common.sh@10 -- # set +x 00:16:51.310 ************************************ 00:16:51.310 START TEST thread_poller_perf 00:16:51.310 ************************************ 00:16:51.310 13:27:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:51.310 [2024-04-26 13:27:08.498247] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:51.310 [2024-04-26 13:27:08.498319] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63283 ] 00:16:51.310 [2024-04-26 13:27:08.632206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.588 Running 1000 pollers for 1 seconds with 0 microseconds period. 
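Note: the poller_cost figures above are consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz: 2213222436 / 304000 ≈ 7280 cycles per poll, and 7280 / 2.2 (cycles per nanosecond at 2200000000 Hz) ≈ 3309 nsec. The same relation holds for the 0-microsecond-period run reported below (≈529 cyc, ≈240 nsec). A quick sanity check of that arithmetic, assuming the relation inferred from the numbers:

    # recompute the reported poller_cost from busy cycles, run count and TSC frequency
    awk 'BEGIN { busy=2213222436; n=304000; hz=2200000000; c=busy/n;
                 printf "%.0f cyc, %.0f nsec\n", c, c/(hz/1e9) }'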
00:16:51.588 [2024-04-26 13:27:08.748710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.524 ====================================== 00:16:52.524 busy:2202084956 (cyc) 00:16:52.524 total_run_count: 4156000 00:16:52.524 tsc_hz: 2200000000 (cyc) 00:16:52.524 ====================================== 00:16:52.524 poller_cost: 529 (cyc), 240 (nsec) 00:16:52.524 00:16:52.524 real 0m1.380s 00:16:52.524 user 0m1.216s 00:16:52.524 sys 0m0.056s 00:16:52.524 13:27:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.524 ************************************ 00:16:52.524 END TEST thread_poller_perf 00:16:52.524 ************************************ 00:16:52.524 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.524 13:27:09 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:52.524 00:16:52.524 real 0m3.109s 00:16:52.524 user 0m2.567s 00:16:52.524 sys 0m0.295s 00:16:52.524 13:27:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:52.524 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.524 ************************************ 00:16:52.524 END TEST thread 00:16:52.524 ************************************ 00:16:52.524 13:27:09 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:52.524 13:27:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:52.524 13:27:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:52.524 13:27:09 -- common/autotest_common.sh@10 -- # set +x 00:16:52.782 ************************************ 00:16:52.782 START TEST accel 00:16:52.782 ************************************ 00:16:52.782 13:27:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:16:52.782 * Looking for test storage... 00:16:52.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:16:52.783 13:27:10 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:16:52.783 13:27:10 -- accel/accel.sh@82 -- # get_expected_opcs 00:16:52.783 13:27:10 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:16:52.783 13:27:10 -- accel/accel.sh@62 -- # spdk_tgt_pid=63357 00:16:52.783 13:27:10 -- accel/accel.sh@63 -- # waitforlisten 63357 00:16:52.783 13:27:10 -- common/autotest_common.sh@817 -- # '[' -z 63357 ']' 00:16:52.783 13:27:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.783 13:27:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:52.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.783 13:27:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.783 13:27:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:52.783 13:27:10 -- common/autotest_common.sh@10 -- # set +x 00:16:52.783 13:27:10 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:16:52.783 13:27:10 -- accel/accel.sh@61 -- # build_accel_config 00:16:52.783 13:27:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:52.783 13:27:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:52.783 13:27:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:52.783 13:27:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:52.783 13:27:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:52.783 13:27:10 -- accel/accel.sh@40 -- # local IFS=, 00:16:52.783 13:27:10 -- accel/accel.sh@41 -- # jq -r . 
00:16:52.783 [2024-04-26 13:27:10.173400] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:52.783 [2024-04-26 13:27:10.173506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63357 ] 00:16:53.041 [2024-04-26 13:27:10.311624] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.041 [2024-04-26 13:27:10.428651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.977 13:27:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:53.977 13:27:11 -- common/autotest_common.sh@850 -- # return 0 00:16:53.977 13:27:11 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:16:53.977 13:27:11 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:16:53.977 13:27:11 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:16:53.977 13:27:11 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:16:53.977 13:27:11 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:16:53.977 13:27:11 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:16:53.977 13:27:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.977 13:27:11 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:16:53.977 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:16:53.977 13:27:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # IFS== 00:16:53.977 13:27:11 -- accel/accel.sh@72 -- # read -r opc module 00:16:53.977 13:27:11 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:16:53.977 13:27:11 -- accel/accel.sh@75 -- # killprocess 63357 00:16:53.977 13:27:11 -- common/autotest_common.sh@936 -- # '[' -z 63357 ']' 00:16:53.977 13:27:11 -- common/autotest_common.sh@940 -- # kill -0 63357 00:16:53.977 13:27:11 -- common/autotest_common.sh@941 -- # uname 00:16:53.977 13:27:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:53.977 13:27:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63357 00:16:53.977 13:27:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:53.977 killing process with pid 63357 00:16:53.977 13:27:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:53.977 13:27:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63357' 00:16:53.977 13:27:11 -- common/autotest_common.sh@955 -- # kill 63357 00:16:53.977 13:27:11 -- common/autotest_common.sh@960 -- # wait 63357 00:16:54.548 13:27:11 -- accel/accel.sh@76 -- # trap - ERR 00:16:54.548 13:27:11 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:16:54.548 13:27:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.548 13:27:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.548 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:16:54.548 13:27:11 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:16:54.548 13:27:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:16:54.548 13:27:11 -- accel/accel.sh@12 -- # build_accel_config 00:16:54.548 13:27:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:54.548 13:27:11 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:54.548 13:27:11 -- accel/accel.sh@40 -- # local IFS=, 00:16:54.548 13:27:11 -- accel/accel.sh@41 -- # jq -r . 00:16:54.548 13:27:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.548 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:16:54.548 13:27:11 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:16:54.548 13:27:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:54.548 13:27:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.548 13:27:11 -- common/autotest_common.sh@10 -- # set +x 00:16:54.548 ************************************ 00:16:54.548 START TEST accel_missing_filename 00:16:54.548 ************************************ 00:16:54.548 13:27:11 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:16:54.548 13:27:11 -- common/autotest_common.sh@638 -- # local es=0 00:16:54.548 13:27:11 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:16:54.548 13:27:11 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:54.548 13:27:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:54.548 13:27:11 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:54.548 13:27:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:54.548 13:27:11 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:16:54.548 13:27:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:16:54.548 13:27:11 -- accel/accel.sh@12 -- # build_accel_config 00:16:54.548 13:27:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:54.548 13:27:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:54.548 13:27:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:54.548 13:27:11 -- accel/accel.sh@40 -- # local IFS=, 00:16:54.548 13:27:11 -- accel/accel.sh@41 -- # jq -r . 00:16:54.548 [2024-04-26 13:27:11.932581] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:54.548 [2024-04-26 13:27:11.932674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63442 ] 00:16:54.807 [2024-04-26 13:27:12.073096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.807 [2024-04-26 13:27:12.170836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.807 [2024-04-26 13:27:12.229039] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:55.066 [2024-04-26 13:27:12.310833] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:16:55.066 A filename is required. 
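Note: "A filename is required." is the intended outcome here, since accel_missing_filename runs accel_perf with -w compress but no input file. Going by the -l option described in the help dumps later in this log, and by the compress_verify test that follows (which passes -l .../test/accel/bib), a compress run that gets past this check would supply an input file, roughly:

    # sketch: a compress run with an input file; the path mirrors the bib file used by the next test
    ./build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib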
00:16:55.066 13:27:12 -- common/autotest_common.sh@641 -- # es=234 00:16:55.066 13:27:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:55.066 13:27:12 -- common/autotest_common.sh@650 -- # es=106 00:16:55.066 13:27:12 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:55.066 13:27:12 -- common/autotest_common.sh@658 -- # es=1 00:16:55.066 13:27:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:55.066 00:16:55.066 real 0m0.524s 00:16:55.066 user 0m0.380s 00:16:55.066 sys 0m0.120s 00:16:55.066 13:27:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:55.066 13:27:12 -- common/autotest_common.sh@10 -- # set +x 00:16:55.066 ************************************ 00:16:55.066 END TEST accel_missing_filename 00:16:55.066 ************************************ 00:16:55.066 13:27:12 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.066 13:27:12 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:55.066 13:27:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.066 13:27:12 -- common/autotest_common.sh@10 -- # set +x 00:16:55.326 ************************************ 00:16:55.326 START TEST accel_compress_verify 00:16:55.326 ************************************ 00:16:55.326 13:27:12 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.326 13:27:12 -- common/autotest_common.sh@638 -- # local es=0 00:16:55.326 13:27:12 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.326 13:27:12 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:55.326 13:27:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:55.326 13:27:12 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:55.326 13:27:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:55.326 13:27:12 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.326 13:27:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:16:55.326 13:27:12 -- accel/accel.sh@12 -- # build_accel_config 00:16:55.326 13:27:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:55.326 13:27:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:55.326 13:27:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:55.326 13:27:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:55.326 13:27:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:55.326 13:27:12 -- accel/accel.sh@40 -- # local IFS=, 00:16:55.326 13:27:12 -- accel/accel.sh@41 -- # jq -r . 00:16:55.326 [2024-04-26 13:27:12.581869] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:55.326 [2024-04-26 13:27:12.581993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63477 ] 00:16:55.326 [2024-04-26 13:27:12.721723] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.585 [2024-04-26 13:27:12.836867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.585 [2024-04-26 13:27:12.891799] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:55.585 [2024-04-26 13:27:12.966342] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:16:55.845 00:16:55.845 Compression does not support the verify option, aborting. 00:16:55.845 13:27:13 -- common/autotest_common.sh@641 -- # es=161 00:16:55.845 13:27:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:55.845 13:27:13 -- common/autotest_common.sh@650 -- # es=33 00:16:55.845 13:27:13 -- common/autotest_common.sh@651 -- # case "$es" in 00:16:55.845 13:27:13 -- common/autotest_common.sh@658 -- # es=1 00:16:55.845 13:27:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:55.845 00:16:55.845 real 0m0.530s 00:16:55.845 user 0m0.360s 00:16:55.845 sys 0m0.118s 00:16:55.845 13:27:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:55.845 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 ************************************ 00:16:55.845 END TEST accel_compress_verify 00:16:55.845 ************************************ 00:16:55.845 13:27:13 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:16:55.845 13:27:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:55.845 13:27:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.845 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 ************************************ 00:16:55.845 START TEST accel_wrong_workload 00:16:55.845 ************************************ 00:16:55.845 13:27:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:16:55.845 13:27:13 -- common/autotest_common.sh@638 -- # local es=0 00:16:55.845 13:27:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:16:55.845 13:27:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:55.845 13:27:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:55.845 13:27:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:55.845 13:27:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:55.845 13:27:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:16:55.845 13:27:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:16:55.845 13:27:13 -- accel/accel.sh@12 -- # build_accel_config 00:16:55.845 13:27:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:55.845 13:27:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:55.845 13:27:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:55.845 13:27:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:55.845 13:27:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:55.845 13:27:13 -- accel/accel.sh@40 -- # local IFS=, 00:16:55.845 13:27:13 -- accel/accel.sh@41 -- # jq -r . 
00:16:55.845 Unsupported workload type: foobar 00:16:55.845 [2024-04-26 13:27:13.242481] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:16:55.845 accel_perf options: 00:16:55.845 [-h help message] 00:16:55.845 [-q queue depth per core] 00:16:55.845 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:55.845 [-T number of threads per core 00:16:55.845 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:55.845 [-t time in seconds] 00:16:55.845 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:55.845 [ dif_verify, , dif_generate, dif_generate_copy 00:16:55.845 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:55.845 [-l for compress/decompress workloads, name of uncompressed input file 00:16:55.845 [-S for crc32c workload, use this seed value (default 0) 00:16:55.845 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:55.845 [-f for fill workload, use this BYTE value (default 255) 00:16:55.845 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:55.845 [-y verify result if this switch is on] 00:16:55.845 [-a tasks to allocate per core (default: same value as -q)] 00:16:55.845 Can be used to spread operations across a wider range of memory. 00:16:55.845 13:27:13 -- common/autotest_common.sh@641 -- # es=1 00:16:55.845 13:27:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:55.845 13:27:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:55.845 13:27:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:55.845 00:16:55.845 real 0m0.034s 00:16:55.845 user 0m0.014s 00:16:55.845 sys 0m0.019s 00:16:55.845 13:27:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:55.845 ************************************ 00:16:55.845 END TEST accel_wrong_workload 00:16:55.845 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 ************************************ 00:16:55.845 13:27:13 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:16:55.845 13:27:13 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:16:55.845 13:27:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.845 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:56.103 ************************************ 00:16:56.103 START TEST accel_negative_buffers 00:16:56.103 ************************************ 00:16:56.103 13:27:13 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:16:56.103 13:27:13 -- common/autotest_common.sh@638 -- # local es=0 00:16:56.103 13:27:13 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:16:56.103 13:27:13 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:16:56.103 13:27:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:56.103 13:27:13 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:16:56.103 13:27:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:56.103 13:27:13 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:16:56.103 13:27:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:16:56.103 13:27:13 -- accel/accel.sh@12 -- # 
build_accel_config 00:16:56.103 13:27:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:56.103 13:27:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:56.103 13:27:13 -- accel/accel.sh@40 -- # local IFS=, 00:16:56.103 13:27:13 -- accel/accel.sh@41 -- # jq -r . 00:16:56.103 -x option must be non-negative. 00:16:56.103 [2024-04-26 13:27:13.391344] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:16:56.103 accel_perf options: 00:16:56.103 [-h help message] 00:16:56.103 [-q queue depth per core] 00:16:56.103 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:16:56.103 [-T number of threads per core 00:16:56.103 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:16:56.103 [-t time in seconds] 00:16:56.103 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:16:56.103 [ dif_verify, , dif_generate, dif_generate_copy 00:16:56.103 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:16:56.103 [-l for compress/decompress workloads, name of uncompressed input file 00:16:56.103 [-S for crc32c workload, use this seed value (default 0) 00:16:56.103 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:16:56.103 [-f for fill workload, use this BYTE value (default 255) 00:16:56.103 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:16:56.103 [-y verify result if this switch is on] 00:16:56.103 [-a tasks to allocate per core (default: same value as -q)] 00:16:56.103 Can be used to spread operations across a wider range of memory. 
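Note: both option dumps above come from deliberately bad invocations (-w foobar, then -x -1 for the xor workload). Reading the same help text, a xor invocation that gets past argument parsing would use the documented minimum of two source buffers, e.g.:

    # sketch: the same xor test with a valid source-buffer count instead of -1
    ./build/examples/accel_perf -t 1 -w xor -y -x 2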
00:16:56.103 13:27:13 -- common/autotest_common.sh@641 -- # es=1 00:16:56.103 13:27:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:56.103 13:27:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:56.103 13:27:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:56.103 00:16:56.103 real 0m0.031s 00:16:56.103 user 0m0.016s 00:16:56.103 sys 0m0.015s 00:16:56.103 ************************************ 00:16:56.103 END TEST accel_negative_buffers 00:16:56.103 ************************************ 00:16:56.103 13:27:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.103 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:56.103 13:27:13 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:16:56.103 13:27:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:56.103 13:27:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.103 13:27:13 -- common/autotest_common.sh@10 -- # set +x 00:16:56.103 ************************************ 00:16:56.103 START TEST accel_crc32c 00:16:56.103 ************************************ 00:16:56.103 13:27:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:16:56.103 13:27:13 -- accel/accel.sh@16 -- # local accel_opc 00:16:56.103 13:27:13 -- accel/accel.sh@17 -- # local accel_module 00:16:56.103 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.103 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.103 13:27:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:16:56.103 13:27:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:16:56.103 13:27:13 -- accel/accel.sh@12 -- # build_accel_config 00:16:56.103 13:27:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:56.103 13:27:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:56.103 13:27:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:56.103 13:27:13 -- accel/accel.sh@40 -- # local IFS=, 00:16:56.103 13:27:13 -- accel/accel.sh@41 -- # jq -r . 00:16:56.103 [2024-04-26 13:27:13.518683] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:56.103 [2024-04-26 13:27:13.518773] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63553 ] 00:16:56.362 [2024-04-26 13:27:13.658968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.362 [2024-04-26 13:27:13.783833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=0x1 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=crc32c 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=32 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=software 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@22 -- # accel_module=software 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=32 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=32 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=1 00:16:56.621 13:27:13 
-- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val=Yes 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:56.621 13:27:13 -- accel/accel.sh@20 -- # val= 00:16:56.621 13:27:13 -- accel/accel.sh@21 -- # case "$var" in 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # IFS=: 00:16:56.621 13:27:13 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.021 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:58.021 13:27:15 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:58.021 13:27:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:58.021 00:16:58.021 real 0m1.543s 00:16:58.021 user 0m1.325s 00:16:58.021 sys 0m0.125s 00:16:58.021 13:27:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.021 13:27:15 -- common/autotest_common.sh@10 -- # set +x 00:16:58.021 ************************************ 00:16:58.021 END TEST accel_crc32c 00:16:58.021 ************************************ 00:16:58.021 13:27:15 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:16:58.021 13:27:15 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:16:58.021 13:27:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.021 13:27:15 -- common/autotest_common.sh@10 -- # set +x 00:16:58.021 ************************************ 00:16:58.021 START TEST accel_crc32c_C2 00:16:58.021 
************************************ 00:16:58.021 13:27:15 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:16:58.021 13:27:15 -- accel/accel.sh@16 -- # local accel_opc 00:16:58.021 13:27:15 -- accel/accel.sh@17 -- # local accel_module 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.021 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.021 13:27:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:16:58.021 13:27:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:16:58.021 13:27:15 -- accel/accel.sh@12 -- # build_accel_config 00:16:58.021 13:27:15 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:58.021 13:27:15 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:58.021 13:27:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:58.021 13:27:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:58.021 13:27:15 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:58.021 13:27:15 -- accel/accel.sh@40 -- # local IFS=, 00:16:58.021 13:27:15 -- accel/accel.sh@41 -- # jq -r . 00:16:58.021 [2024-04-26 13:27:15.179458] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:16:58.021 [2024-04-26 13:27:15.179554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:16:58.021 [2024-04-26 13:27:15.319816] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.021 [2024-04-26 13:27:15.440995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val=0x1 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val=crc32c 00:16:58.280 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.280 13:27:15 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.280 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.280 13:27:15 -- accel/accel.sh@20 -- # val=0 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" 
in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val=software 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@22 -- # accel_module=software 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val=32 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val=32 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val=1 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val=Yes 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:58.281 13:27:15 -- accel/accel.sh@20 -- # val= 00:16:58.281 13:27:15 -- accel/accel.sh@21 -- # case "$var" in 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # IFS=: 00:16:58.281 13:27:15 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@20 -- # val= 
00:16:59.657 13:27:16 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:16:59.657 13:27:16 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:16:59.657 13:27:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:59.657 00:16:59.657 real 0m1.540s 00:16:59.657 user 0m1.323s 00:16:59.657 sys 0m0.120s 00:16:59.657 13:27:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:59.657 ************************************ 00:16:59.657 END TEST accel_crc32c_C2 00:16:59.657 ************************************ 00:16:59.657 13:27:16 -- common/autotest_common.sh@10 -- # set +x 00:16:59.657 13:27:16 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:16:59.657 13:27:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:16:59.657 13:27:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.657 13:27:16 -- common/autotest_common.sh@10 -- # set +x 00:16:59.657 ************************************ 00:16:59.657 START TEST accel_copy 00:16:59.657 ************************************ 00:16:59.657 13:27:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:16:59.657 13:27:16 -- accel/accel.sh@16 -- # local accel_opc 00:16:59.657 13:27:16 -- accel/accel.sh@17 -- # local accel_module 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # IFS=: 00:16:59.657 13:27:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:16:59.657 13:27:16 -- accel/accel.sh@19 -- # read -r var val 00:16:59.657 13:27:16 -- accel/accel.sh@12 -- # build_accel_config 00:16:59.657 13:27:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:16:59.657 13:27:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:16:59.657 13:27:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:16:59.657 13:27:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:16:59.657 13:27:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:16:59.657 13:27:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:16:59.657 13:27:16 -- accel/accel.sh@40 -- # local IFS=, 00:16:59.657 13:27:16 -- accel/accel.sh@41 -- # jq -r . 00:16:59.657 [2024-04-26 13:27:16.835569] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:16:59.657 [2024-04-26 13:27:16.835656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63630 ] 00:16:59.657 [2024-04-26 13:27:16.975219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.657 [2024-04-26 13:27:17.099168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=0x1 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=copy 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@23 -- # accel_opc=copy 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=software 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@22 -- # accel_module=software 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=32 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=32 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=1 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:16:59.954 
13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val=Yes 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:16:59.954 13:27:17 -- accel/accel.sh@20 -- # val= 00:16:59.954 13:27:17 -- accel/accel.sh@21 -- # case "$var" in 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # IFS=: 00:16:59.954 13:27:17 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.328 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:01.328 13:27:18 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:17:01.328 13:27:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:01.328 00:17:01.328 real 0m1.549s 00:17:01.328 user 0m1.338s 00:17:01.328 sys 0m0.115s 00:17:01.328 ************************************ 00:17:01.328 END TEST accel_copy 00:17:01.328 ************************************ 00:17:01.328 13:27:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.328 13:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:01.328 13:27:18 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:01.328 13:27:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:01.328 13:27:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.328 13:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:01.328 ************************************ 00:17:01.328 START TEST accel_fill 00:17:01.328 ************************************ 00:17:01.328 13:27:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:01.328 13:27:18 -- accel/accel.sh@16 -- # local accel_opc 00:17:01.328 13:27:18 -- accel/accel.sh@17 -- # local 
accel_module 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.328 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.328 13:27:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:01.328 13:27:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:17:01.328 13:27:18 -- accel/accel.sh@12 -- # build_accel_config 00:17:01.328 13:27:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:01.328 13:27:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:01.328 13:27:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:01.328 13:27:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:01.328 13:27:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:01.328 13:27:18 -- accel/accel.sh@40 -- # local IFS=, 00:17:01.328 13:27:18 -- accel/accel.sh@41 -- # jq -r . 00:17:01.328 [2024-04-26 13:27:18.505400] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:01.328 [2024-04-26 13:27:18.505531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:17:01.328 [2024-04-26 13:27:18.644086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.587 [2024-04-26 13:27:18.779955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=0x1 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=fill 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@23 -- # accel_opc=fill 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=0x80 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case 
"$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=software 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@22 -- # accel_module=software 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=64 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=64 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=1 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val=Yes 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:01.587 13:27:18 -- accel/accel.sh@20 -- # val= 00:17:01.587 13:27:18 -- accel/accel.sh@21 -- # case "$var" in 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # IFS=: 00:17:01.587 13:27:18 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:02.963 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:17:02.963 13:27:20 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:17:02.963 13:27:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:02.963 00:17:02.963 real 0m1.569s 00:17:02.963 user 0m1.350s 00:17:02.963 sys 0m0.126s 00:17:02.963 13:27:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.963 ************************************ 00:17:02.963 END TEST accel_fill 00:17:02.963 ************************************ 00:17:02.963 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:17:02.963 13:27:20 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:17:02.963 13:27:20 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:02.963 13:27:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.963 13:27:20 -- common/autotest_common.sh@10 -- # set +x 00:17:02.963 ************************************ 00:17:02.963 START TEST accel_copy_crc32c 00:17:02.963 ************************************ 00:17:02.963 13:27:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:17:02.963 13:27:20 -- accel/accel.sh@16 -- # local accel_opc 00:17:02.963 13:27:20 -- accel/accel.sh@17 -- # local accel_module 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:02.963 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:02.963 13:27:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:17:02.963 13:27:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:17:02.963 13:27:20 -- accel/accel.sh@12 -- # build_accel_config 00:17:02.963 13:27:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:02.963 13:27:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:02.963 13:27:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:02.963 13:27:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:02.963 13:27:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:02.963 13:27:20 -- accel/accel.sh@40 -- # local IFS=, 00:17:02.963 13:27:20 -- accel/accel.sh@41 -- # jq -r . 00:17:02.963 [2024-04-26 13:27:20.190924] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
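The accel_fill case that just completed is, at bottom, a single accel_perf run; a rough way to repeat it outside the harness, assuming the same /home/vagrant/spdk_repo checkout used by this job and skipping the JSON accel config that accel.sh appears to pipe in on /dev/fd/62, would be:
  # re-run the 1-second fill workload with the same flags the trace above shows;
  # without -c, accel_perf falls back to its default (software) accel module
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y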
00:17:02.963 [2024-04-26 13:27:20.191006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63707 ] 00:17:02.963 [2024-04-26 13:27:20.329898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.221 [2024-04-26 13:27:20.453544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val=0x1 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.221 13:27:20 -- accel/accel.sh@20 -- # val=0 00:17:03.221 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.221 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val=software 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@22 -- # accel_module=software 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val=32 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val=32 
00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val=1 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val=Yes 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:03.222 13:27:20 -- accel/accel.sh@20 -- # val= 00:17:03.222 13:27:20 -- accel/accel.sh@21 -- # case "$var" in 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # IFS=: 00:17:03.222 13:27:20 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@20 -- # val= 00:17:04.597 13:27:21 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:04.597 13:27:21 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:04.597 13:27:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:04.597 00:17:04.597 real 0m1.550s 00:17:04.597 user 0m1.331s 00:17:04.597 sys 0m0.124s 00:17:04.597 13:27:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:04.597 ************************************ 00:17:04.597 END TEST accel_copy_crc32c 00:17:04.597 ************************************ 00:17:04.597 13:27:21 -- common/autotest_common.sh@10 -- # set +x 00:17:04.597 13:27:21 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:17:04.597 13:27:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:17:04.597 13:27:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.597 13:27:21 -- common/autotest_common.sh@10 -- # set +x 00:17:04.597 ************************************ 00:17:04.597 START TEST accel_copy_crc32c_C2 00:17:04.597 ************************************ 00:17:04.597 13:27:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:17:04.597 13:27:21 -- accel/accel.sh@16 -- # local accel_opc 00:17:04.597 13:27:21 -- accel/accel.sh@17 -- # local accel_module 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # IFS=: 00:17:04.597 13:27:21 -- accel/accel.sh@19 -- # read -r var val 00:17:04.597 13:27:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:17:04.597 13:27:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:17:04.597 13:27:21 -- accel/accel.sh@12 -- # build_accel_config 00:17:04.597 13:27:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:04.597 13:27:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:04.597 13:27:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:04.597 13:27:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:04.597 13:27:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:04.597 13:27:21 -- accel/accel.sh@40 -- # local IFS=, 00:17:04.597 13:27:21 -- accel/accel.sh@41 -- # jq -r . 00:17:04.597 [2024-04-26 13:27:21.854226] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:04.597 [2024-04-26 13:27:21.854322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63751 ] 00:17:04.597 [2024-04-26 13:27:21.987993] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.856 [2024-04-26 13:27:22.107273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=0x1 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=copy_crc32c 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=0 00:17:04.856 13:27:22 -- 
accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val='8192 bytes' 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=software 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@22 -- # accel_module=software 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=32 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=32 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=1 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val=Yes 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:04.856 13:27:22 -- accel/accel.sh@20 -- # val= 00:17:04.856 13:27:22 -- accel/accel.sh@21 -- # case "$var" in 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # IFS=: 00:17:04.856 13:27:22 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 
00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.272 ************************************ 00:17:06.272 END TEST accel_copy_crc32c_C2 00:17:06.272 ************************************ 00:17:06.272 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:06.272 13:27:23 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:17:06.272 13:27:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:06.272 00:17:06.272 real 0m1.547s 00:17:06.272 user 0m1.336s 00:17:06.272 sys 0m0.117s 00:17:06.272 13:27:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:06.272 13:27:23 -- common/autotest_common.sh@10 -- # set +x 00:17:06.272 13:27:23 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:17:06.272 13:27:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:06.272 13:27:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:06.272 13:27:23 -- common/autotest_common.sh@10 -- # set +x 00:17:06.272 ************************************ 00:17:06.272 START TEST accel_dualcast 00:17:06.272 ************************************ 00:17:06.272 13:27:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:17:06.272 13:27:23 -- accel/accel.sh@16 -- # local accel_opc 00:17:06.272 13:27:23 -- accel/accel.sh@17 -- # local accel_module 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.272 13:27:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:17:06.272 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.272 13:27:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:17:06.272 13:27:23 -- accel/accel.sh@12 -- # build_accel_config 00:17:06.272 13:27:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:06.272 13:27:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:06.272 13:27:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:06.272 13:27:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:06.272 13:27:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:06.272 13:27:23 -- accel/accel.sh@40 -- # local IFS=, 00:17:06.272 13:27:23 -- accel/accel.sh@41 -- # jq -r . 00:17:06.272 [2024-04-26 13:27:23.527744] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
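Both copy_crc32c passes above use the same binary with only the trailing flags changed; a minimal side-by-side, again assuming the paths from this job and no JSON config, would look like:
  # plain copy+crc32c: the trace shows 4096-byte source and destination buffers
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
  # chained variant from TEST accel_copy_crc32c_C2: with -C 2 the trace shows an 8192-byte destination
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2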
00:17:06.272 [2024-04-26 13:27:23.527842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63788 ] 00:17:06.272 [2024-04-26 13:27:23.663890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.530 [2024-04-26 13:27:23.778431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.530 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.530 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.530 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.530 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.530 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.530 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.530 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.530 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.530 13:27:23 -- accel/accel.sh@20 -- # val=0x1 00:17:06.530 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=dualcast 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=software 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@22 -- # accel_module=software 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=32 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=32 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=1 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val='1 seconds' 
00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val=Yes 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:06.531 13:27:23 -- accel/accel.sh@20 -- # val= 00:17:06.531 13:27:23 -- accel/accel.sh@21 -- # case "$var" in 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # IFS=: 00:17:06.531 13:27:23 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:07.907 ************************************ 00:17:07.907 END TEST accel_dualcast 00:17:07.907 ************************************ 00:17:07.907 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:07.907 13:27:25 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:17:07.907 13:27:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:07.907 00:17:07.907 real 0m1.544s 00:17:07.907 user 0m1.332s 00:17:07.907 sys 0m0.116s 00:17:07.907 13:27:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:07.907 13:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:07.907 13:27:25 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:17:07.907 13:27:25 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:07.907 13:27:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.907 13:27:25 -- common/autotest_common.sh@10 -- # set +x 00:17:07.907 ************************************ 00:17:07.907 START TEST accel_compare 00:17:07.907 ************************************ 00:17:07.907 13:27:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:17:07.907 13:27:25 -- accel/accel.sh@16 -- # local accel_opc 00:17:07.907 13:27:25 -- accel/accel.sh@17 -- # local 
accel_module 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:07.907 13:27:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:17:07.907 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:07.907 13:27:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:17:07.907 13:27:25 -- accel/accel.sh@12 -- # build_accel_config 00:17:07.907 13:27:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:07.907 13:27:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:07.907 13:27:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:07.907 13:27:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:07.907 13:27:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:07.907 13:27:25 -- accel/accel.sh@40 -- # local IFS=, 00:17:07.907 13:27:25 -- accel/accel.sh@41 -- # jq -r . 00:17:07.907 [2024-04-26 13:27:25.198990] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:07.907 [2024-04-26 13:27:25.199094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63832 ] 00:17:07.907 [2024-04-26 13:27:25.338104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.167 [2024-04-26 13:27:25.472151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=0x1 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=compare 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@23 -- # accel_opc=compare 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=software 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 
00:17:08.167 13:27:25 -- accel/accel.sh@22 -- # accel_module=software 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=32 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=32 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=1 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val=Yes 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:08.167 13:27:25 -- accel/accel.sh@20 -- # val= 00:17:08.167 13:27:25 -- accel/accel.sh@21 -- # case "$var" in 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # IFS=: 00:17:08.167 13:27:25 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@20 -- # val= 00:17:09.545 13:27:26 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:09.545 13:27:26 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:17:09.545 13:27:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:09.545 00:17:09.545 real 0m1.566s 00:17:09.545 user 0m1.355s 00:17:09.545 sys 
0m0.114s 00:17:09.545 13:27:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:09.545 13:27:26 -- common/autotest_common.sh@10 -- # set +x 00:17:09.545 ************************************ 00:17:09.545 END TEST accel_compare 00:17:09.545 ************************************ 00:17:09.545 13:27:26 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:17:09.545 13:27:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:17:09.545 13:27:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:09.545 13:27:26 -- common/autotest_common.sh@10 -- # set +x 00:17:09.545 ************************************ 00:17:09.545 START TEST accel_xor 00:17:09.545 ************************************ 00:17:09.545 13:27:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:17:09.545 13:27:26 -- accel/accel.sh@16 -- # local accel_opc 00:17:09.545 13:27:26 -- accel/accel.sh@17 -- # local accel_module 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # IFS=: 00:17:09.545 13:27:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:17:09.545 13:27:26 -- accel/accel.sh@19 -- # read -r var val 00:17:09.545 13:27:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:17:09.545 13:27:26 -- accel/accel.sh@12 -- # build_accel_config 00:17:09.545 13:27:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:09.545 13:27:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:09.545 13:27:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:09.545 13:27:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:09.545 13:27:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:09.545 13:27:26 -- accel/accel.sh@40 -- # local IFS=, 00:17:09.545 13:27:26 -- accel/accel.sh@41 -- # jq -r . 00:17:09.545 [2024-04-26 13:27:26.882454] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:17:09.545 [2024-04-26 13:27:26.882565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63872 ] 00:17:09.804 [2024-04-26 13:27:27.021507] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.804 [2024-04-26 13:27:27.140821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=0x1 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=xor 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@23 -- # accel_opc=xor 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=2 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=software 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@22 -- # accel_module=software 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=32 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=32 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=1 00:17:09.804 13:27:27 -- 
accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val=Yes 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.804 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.804 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.804 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.805 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.805 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:09.805 13:27:27 -- accel/accel.sh@20 -- # val= 00:17:09.805 13:27:27 -- accel/accel.sh@21 -- # case "$var" in 00:17:09.805 13:27:27 -- accel/accel.sh@19 -- # IFS=: 00:17:09.805 13:27:27 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.182 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:11.182 13:27:28 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:11.182 13:27:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:11.182 00:17:11.182 real 0m1.542s 00:17:11.182 user 0m1.329s 00:17:11.182 sys 0m0.120s 00:17:11.182 13:27:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:11.182 ************************************ 00:17:11.182 END TEST accel_xor 00:17:11.182 ************************************ 00:17:11.182 13:27:28 -- common/autotest_common.sh@10 -- # set +x 00:17:11.182 13:27:28 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:17:11.182 13:27:28 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:11.182 13:27:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:11.182 13:27:28 -- common/autotest_common.sh@10 -- # set +x 00:17:11.182 ************************************ 00:17:11.182 START TEST accel_xor 00:17:11.182 ************************************ 00:17:11.182 
13:27:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:17:11.182 13:27:28 -- accel/accel.sh@16 -- # local accel_opc 00:17:11.182 13:27:28 -- accel/accel.sh@17 -- # local accel_module 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.182 13:27:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:17:11.182 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.182 13:27:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:17:11.182 13:27:28 -- accel/accel.sh@12 -- # build_accel_config 00:17:11.182 13:27:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:11.182 13:27:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:11.182 13:27:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:11.182 13:27:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:11.182 13:27:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:11.182 13:27:28 -- accel/accel.sh@40 -- # local IFS=, 00:17:11.182 13:27:28 -- accel/accel.sh@41 -- # jq -r . 00:17:11.182 [2024-04-26 13:27:28.557343] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:11.182 [2024-04-26 13:27:28.557453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63910 ] 00:17:11.441 [2024-04-26 13:27:28.696835] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.441 [2024-04-26 13:27:28.818451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=0x1 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=xor 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@23 -- # accel_opc=xor 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=3 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 
00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=software 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@22 -- # accel_module=software 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=32 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.441 13:27:28 -- accel/accel.sh@20 -- # val=32 00:17:11.441 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.441 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.731 13:27:28 -- accel/accel.sh@20 -- # val=1 00:17:11.731 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.731 13:27:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:11.731 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.731 13:27:28 -- accel/accel.sh@20 -- # val=Yes 00:17:11.731 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.731 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.731 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:11.731 13:27:28 -- accel/accel.sh@20 -- # val= 00:17:11.731 13:27:28 -- accel/accel.sh@21 -- # case "$var" in 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # IFS=: 00:17:11.731 13:27:28 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:12.689 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 
00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.689 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.689 13:27:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:12.689 13:27:30 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:17:12.689 13:27:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:12.689 00:17:12.689 real 0m1.554s 00:17:12.689 user 0m1.334s 00:17:12.689 sys 0m0.126s 00:17:12.689 ************************************ 00:17:12.689 END TEST accel_xor 00:17:12.689 ************************************ 00:17:12.689 13:27:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:12.689 13:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:12.689 13:27:30 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:17:12.689 13:27:30 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:12.689 13:27:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:12.689 13:27:30 -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 ************************************ 00:17:12.949 START TEST accel_dif_verify 00:17:12.949 ************************************ 00:17:12.949 13:27:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:17:12.949 13:27:30 -- accel/accel.sh@16 -- # local accel_opc 00:17:12.949 13:27:30 -- accel/accel.sh@17 -- # local accel_module 00:17:12.949 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:12.949 13:27:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:17:12.949 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:12.949 13:27:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:17:12.949 13:27:30 -- accel/accel.sh@12 -- # build_accel_config 00:17:12.949 13:27:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:12.949 13:27:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:12.949 13:27:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:12.949 13:27:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:12.949 13:27:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:12.949 13:27:30 -- accel/accel.sh@40 -- # local IFS=, 00:17:12.949 13:27:30 -- accel/accel.sh@41 -- # jq -r . 00:17:12.949 [2024-04-26 13:27:30.221870] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
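The two xor runs above differ only in the source count; reproducing them by hand, under the same assumptions as the earlier sketches, would be roughly:
  # default xor (the trace shows val=2, i.e. two 4096-byte sources)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  # -x 3 raises the source count to three (val=3 in the trace)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3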
00:17:12.949 [2024-04-26 13:27:30.221954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63954 ] 00:17:12.949 [2024-04-26 13:27:30.361582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.208 [2024-04-26 13:27:30.476694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val=0x1 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val=dif_verify 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val='512 bytes' 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val='8 bytes' 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 -- # val=software 00:17:13.208 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.208 13:27:30 -- accel/accel.sh@22 -- # accel_module=software 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.208 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.208 13:27:30 -- accel/accel.sh@20 
-- # val=32 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val=32 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val=1 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val=No 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:13.209 13:27:30 -- accel/accel.sh@20 -- # val= 00:17:13.209 13:27:30 -- accel/accel.sh@21 -- # case "$var" in 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # IFS=: 00:17:13.209 13:27:30 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@20 -- # val= 00:17:14.586 13:27:31 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:14.586 13:27:31 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:17:14.586 13:27:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:14.586 00:17:14.586 real 0m1.539s 00:17:14.586 user 0m1.327s 00:17:14.586 sys 0m0.114s 00:17:14.586 13:27:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.586 13:27:31 -- common/autotest_common.sh@10 -- # set +x 00:17:14.586 ************************************ 00:17:14.586 END TEST 
accel_dif_verify 00:17:14.586 ************************************ 00:17:14.586 13:27:31 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:17:14.586 13:27:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:14.586 13:27:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.586 13:27:31 -- common/autotest_common.sh@10 -- # set +x 00:17:14.586 ************************************ 00:17:14.586 START TEST accel_dif_generate 00:17:14.586 ************************************ 00:17:14.586 13:27:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:17:14.586 13:27:31 -- accel/accel.sh@16 -- # local accel_opc 00:17:14.586 13:27:31 -- accel/accel.sh@17 -- # local accel_module 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # IFS=: 00:17:14.586 13:27:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:17:14.586 13:27:31 -- accel/accel.sh@19 -- # read -r var val 00:17:14.586 13:27:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:17:14.586 13:27:31 -- accel/accel.sh@12 -- # build_accel_config 00:17:14.586 13:27:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:14.586 13:27:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:14.586 13:27:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:14.586 13:27:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:14.586 13:27:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:14.586 13:27:31 -- accel/accel.sh@40 -- # local IFS=, 00:17:14.586 13:27:31 -- accel/accel.sh@41 -- # jq -r . 00:17:14.586 [2024-04-26 13:27:31.889467] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:14.586 [2024-04-26 13:27:31.889878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63993 ] 00:17:14.586 [2024-04-26 13:27:32.029608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.857 [2024-04-26 13:27:32.142580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=0x1 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=dif_generate 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val='512 bytes' 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val='8 bytes' 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=software 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@22 -- # accel_module=software 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=32 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=32 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=1 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val=No 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:14.857 13:27:32 -- accel/accel.sh@20 -- # val= 00:17:14.857 13:27:32 -- accel/accel.sh@21 -- # case "$var" in 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # IFS=: 00:17:14.857 13:27:32 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- 
accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.246 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.246 13:27:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:16.246 13:27:33 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:17:16.246 13:27:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.246 ************************************ 00:17:16.246 END TEST accel_dif_generate 00:17:16.246 ************************************ 00:17:16.246 00:17:16.246 real 0m1.543s 00:17:16.246 user 0m1.329s 00:17:16.246 sys 0m0.115s 00:17:16.246 13:27:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:16.246 13:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 13:27:33 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:17:16.246 13:27:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:16.246 13:27:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.246 13:27:33 -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 ************************************ 00:17:16.246 START TEST accel_dif_generate_copy 00:17:16.246 ************************************ 00:17:16.246 13:27:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:17:16.246 13:27:33 -- accel/accel.sh@16 -- # local accel_opc 00:17:16.246 13:27:33 -- accel/accel.sh@17 -- # local accel_module 00:17:16.246 13:27:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:17:16.246 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.247 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.247 13:27:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:17:16.247 13:27:33 -- accel/accel.sh@12 -- # build_accel_config 00:17:16.247 13:27:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:16.247 13:27:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:16.247 13:27:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:16.247 13:27:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:16.247 13:27:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:16.247 13:27:33 -- accel/accel.sh@40 -- # local IFS=, 00:17:16.247 13:27:33 -- accel/accel.sh@41 -- # jq -r . 00:17:16.247 [2024-04-26 13:27:33.553858] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
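The long runs of val=... / IFS=: / read -r var val above and below are accel.sh stepping through accel_perf's dumped settings one key:value pair at a time, so that the closing checks ([[ -n software ]], [[ -n dif_generate_copy ]]) can assert on the opcode and module that actually ran. A minimal sketch of that shape; the key patterns and the input redirection are placeholders inferred from the trace, not the verbatim accel.sh source:

  # Split each dumped "key:value" line on ':' and remember the opcode and module;
  # the test's final [[ -n ... ]] checks assert on these two variables.
  while IFS=: read -r var val; do
    case "$var" in
      *opc*)    accel_opc=$val ;;      # hypothetical key pattern
      *module*) accel_module=$val ;;   # hypothetical key pattern
    esac
  done < settings_dump                 # placeholder for accel_perf's dumped config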
00:17:16.247 [2024-04-26 13:27:33.553941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64031 ] 00:17:16.247 [2024-04-26 13:27:33.693613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.505 [2024-04-26 13:27:33.811864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=0x1 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=software 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@22 -- # accel_module=software 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=32 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=32 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 
-- # val=1 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val=No 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:16.506 13:27:33 -- accel/accel.sh@20 -- # val= 00:17:16.506 13:27:33 -- accel/accel.sh@21 -- # case "$var" in 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # IFS=: 00:17:16.506 13:27:33 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:17.881 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.881 13:27:35 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:17.881 13:27:35 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:17:17.881 13:27:35 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:17.881 00:17:17.881 real 0m1.543s 00:17:17.881 user 0m1.323s 00:17:17.881 sys 0m0.123s 00:17:17.881 13:27:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.881 13:27:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.881 ************************************ 00:17:17.881 END TEST accel_dif_generate_copy 00:17:17.881 ************************************ 00:17:17.881 13:27:35 -- accel/accel.sh@115 -- # [[ y == y ]] 00:17:17.881 13:27:35 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:17.881 13:27:35 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:17:17.881 13:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.881 13:27:35 -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.881 ************************************ 00:17:17.881 START TEST accel_comp 00:17:17.881 ************************************ 00:17:17.881 13:27:35 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:17.881 13:27:35 -- accel/accel.sh@16 -- # local accel_opc 00:17:17.881 13:27:35 -- accel/accel.sh@17 -- # local accel_module 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:17.881 13:27:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:17.881 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:17.882 13:27:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:17.882 13:27:35 -- accel/accel.sh@12 -- # build_accel_config 00:17:17.882 13:27:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:17.882 13:27:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:17.882 13:27:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:17.882 13:27:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:17.882 13:27:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:17.882 13:27:35 -- accel/accel.sh@40 -- # local IFS=, 00:17:17.882 13:27:35 -- accel/accel.sh@41 -- # jq -r . 00:17:17.882 [2024-04-26 13:27:35.206745] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:17.882 [2024-04-26 13:27:35.206860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64070 ] 00:17:18.205 [2024-04-26 13:27:35.345750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.205 [2024-04-26 13:27:35.458789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val=0x1 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.205 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.205 13:27:35 -- accel/accel.sh@20 -- # val=compress 00:17:18.205 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.205 13:27:35 -- accel/accel.sh@23 
-- # accel_opc=compress 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=software 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@22 -- # accel_module=software 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=32 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=32 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=1 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val=No 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:18.206 13:27:35 -- accel/accel.sh@20 -- # val= 00:17:18.206 13:27:35 -- accel/accel.sh@21 -- # case "$var" in 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # IFS=: 00:17:18.206 13:27:35 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # 
read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@20 -- # val= 00:17:19.581 13:27:36 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:19.581 13:27:36 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:17:19.581 13:27:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:19.581 00:17:19.581 real 0m1.533s 00:17:19.581 user 0m1.325s 00:17:19.581 sys 0m0.114s 00:17:19.581 13:27:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:19.581 13:27:36 -- common/autotest_common.sh@10 -- # set +x 00:17:19.581 ************************************ 00:17:19.581 END TEST accel_comp 00:17:19.581 ************************************ 00:17:19.581 13:27:36 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:19.581 13:27:36 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:17:19.581 13:27:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:19.581 13:27:36 -- common/autotest_common.sh@10 -- # set +x 00:17:19.581 ************************************ 00:17:19.581 START TEST accel_decomp 00:17:19.581 ************************************ 00:17:19.581 13:27:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:19.581 13:27:36 -- accel/accel.sh@16 -- # local accel_opc 00:17:19.581 13:27:36 -- accel/accel.sh@17 -- # local accel_module 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # IFS=: 00:17:19.581 13:27:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:19.581 13:27:36 -- accel/accel.sh@19 -- # read -r var val 00:17:19.581 13:27:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:17:19.581 13:27:36 -- accel/accel.sh@12 -- # build_accel_config 00:17:19.581 13:27:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:19.581 13:27:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:19.581 13:27:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:19.581 13:27:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:19.581 13:27:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:19.581 13:27:36 -- accel/accel.sh@40 -- # local IFS=, 00:17:19.581 13:27:36 -- accel/accel.sh@41 -- # jq -r . 00:17:19.581 [2024-04-26 13:27:36.870604] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
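To repeat just the decompress case by hand, the command line logged above can be reused as-is: -l names the input file under test/accel/ and -y enables result verification, both copied verbatim from the run_test invocation (a sketch assuming the same repo layout):

  cd /home/vagrant/spdk_repo/spdk
  # workload, input file and verify flag exactly as in the logged command line
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y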
00:17:19.581 [2024-04-26 13:27:36.870696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64108 ] 00:17:19.581 [2024-04-26 13:27:37.007838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.841 [2024-04-26 13:27:37.114003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=0x1 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=decompress 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=software 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@22 -- # accel_module=software 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=32 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- 
accel/accel.sh@20 -- # val=32 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=1 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val=Yes 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:19.841 13:27:37 -- accel/accel.sh@20 -- # val= 00:17:19.841 13:27:37 -- accel/accel.sh@21 -- # case "$var" in 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # IFS=: 00:17:19.841 13:27:37 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.218 ************************************ 00:17:21.218 END TEST accel_decomp 00:17:21.218 ************************************ 00:17:21.218 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.218 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.218 13:27:38 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:21.218 13:27:38 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:21.218 13:27:38 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:21.218 00:17:21.218 real 0m1.524s 00:17:21.218 user 0m1.322s 00:17:21.218 sys 0m0.108s 00:17:21.218 13:27:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:21.218 13:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:21.218 13:27:38 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
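accel_decmop_full runs the same decompress workload with -o 0 appended; in the trace that follows, the per-operation buffer reported by accel_perf grows from the default '4096 bytes' to '111250 bytes' (the full bib test file), which is what the extra flag appears to control. A sketch of that variant, under the same assumptions as the earlier sketches:

  # same decompress run, but with -o 0 as in the run_test line above
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0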
00:17:21.218 13:27:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:21.218 13:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:21.218 13:27:38 -- common/autotest_common.sh@10 -- # set +x 00:17:21.219 ************************************ 00:17:21.219 START TEST accel_decmop_full 00:17:21.219 ************************************ 00:17:21.219 13:27:38 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:21.219 13:27:38 -- accel/accel.sh@16 -- # local accel_opc 00:17:21.219 13:27:38 -- accel/accel.sh@17 -- # local accel_module 00:17:21.219 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.219 13:27:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:21.219 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.219 13:27:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:17:21.219 13:27:38 -- accel/accel.sh@12 -- # build_accel_config 00:17:21.219 13:27:38 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:21.219 13:27:38 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:21.219 13:27:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:21.219 13:27:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:21.219 13:27:38 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:21.219 13:27:38 -- accel/accel.sh@40 -- # local IFS=, 00:17:21.219 13:27:38 -- accel/accel.sh@41 -- # jq -r . 00:17:21.219 [2024-04-26 13:27:38.521332] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:21.219 [2024-04-26 13:27:38.521417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64147 ] 00:17:21.219 [2024-04-26 13:27:38.660324] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.478 [2024-04-26 13:27:38.775480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=0x1 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 
13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=decompress 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=software 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@22 -- # accel_module=software 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=32 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=32 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=1 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val=Yes 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:21.478 13:27:38 -- accel/accel.sh@20 -- # val= 00:17:21.478 13:27:38 -- accel/accel.sh@21 -- # case "$var" in 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # IFS=: 00:17:21.478 13:27:38 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r 
var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:22.855 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.855 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.855 ************************************ 00:17:22.855 END TEST accel_decmop_full 00:17:22.855 ************************************ 00:17:22.855 13:27:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:22.855 13:27:40 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:22.855 13:27:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:22.855 00:17:22.855 real 0m1.543s 00:17:22.855 user 0m1.330s 00:17:22.855 sys 0m0.119s 00:17:22.855 13:27:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.855 13:27:40 -- common/autotest_common.sh@10 -- # set +x 00:17:22.855 13:27:40 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:22.855 13:27:40 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:22.855 13:27:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.855 13:27:40 -- common/autotest_common.sh@10 -- # set +x 00:17:22.855 ************************************ 00:17:22.855 START TEST accel_decomp_mcore 00:17:22.855 ************************************ 00:17:22.855 13:27:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:22.855 13:27:40 -- accel/accel.sh@16 -- # local accel_opc 00:17:22.856 13:27:40 -- accel/accel.sh@17 -- # local accel_module 00:17:22.856 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:22.856 13:27:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:22.856 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:22.856 13:27:40 -- accel/accel.sh@12 -- # build_accel_config 00:17:22.856 13:27:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:17:22.856 13:27:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:22.856 13:27:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:22.856 13:27:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:22.856 13:27:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:22.856 13:27:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:22.856 13:27:40 -- accel/accel.sh@40 -- # local IFS=, 00:17:22.856 13:27:40 -- accel/accel.sh@41 -- # jq -r . 00:17:22.856 [2024-04-26 13:27:40.180732] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
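accel_decomp_mcore adds -m 0xf, the standard SPDK core-mask option: the trace below reports 'Total cores available: 4' and starts reactors on cores 0 through 3, and the test summary's roughly 4.7s of user time against roughly 1.6s of wall time reflects those four cores running concurrently. A sketch of the multi-core variant, same assumptions as above:

  # -m 0xf: run the app framework on cores 0-3, as in the logged command line
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf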
00:17:22.856 [2024-04-26 13:27:40.180856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64191 ] 00:17:23.114 [2024-04-26 13:27:40.321943] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:23.114 [2024-04-26 13:27:40.451309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.114 [2024-04-26 13:27:40.451374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.114 [2024-04-26 13:27:40.451476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.114 [2024-04-26 13:27:40.451478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val=0xf 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val=decompress 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:23.114 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.114 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.114 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=software 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@22 -- # accel_module=software 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 
00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=32 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=32 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=1 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val=Yes 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:23.115 13:27:40 -- accel/accel.sh@20 -- # val= 00:17:23.115 13:27:40 -- accel/accel.sh@21 -- # case "$var" in 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # IFS=: 00:17:23.115 13:27:40 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- 
accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@20 -- # val= 00:17:24.489 13:27:41 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:24.489 13:27:41 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:24.489 13:27:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.489 00:17:24.489 real 0m1.566s 00:17:24.489 user 0m4.745s 00:17:24.489 sys 0m0.137s 00:17:24.489 13:27:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:24.489 ************************************ 00:17:24.489 END TEST accel_decomp_mcore 00:17:24.489 ************************************ 00:17:24.489 13:27:41 -- common/autotest_common.sh@10 -- # set +x 00:17:24.489 13:27:41 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:24.489 13:27:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:24.489 13:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:24.489 13:27:41 -- common/autotest_common.sh@10 -- # set +x 00:17:24.489 ************************************ 00:17:24.489 START TEST accel_decomp_full_mcore 00:17:24.489 ************************************ 00:17:24.489 13:27:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:24.489 13:27:41 -- accel/accel.sh@16 -- # local accel_opc 00:17:24.489 13:27:41 -- accel/accel.sh@17 -- # local accel_module 00:17:24.489 13:27:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # IFS=: 00:17:24.489 13:27:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:17:24.489 13:27:41 -- accel/accel.sh@19 -- # read -r var val 00:17:24.489 13:27:41 -- accel/accel.sh@12 -- # build_accel_config 00:17:24.489 13:27:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:24.489 13:27:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:24.489 13:27:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:24.489 13:27:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:24.489 13:27:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:24.489 13:27:41 -- accel/accel.sh@40 -- # local IFS=, 00:17:24.489 13:27:41 -- accel/accel.sh@41 -- # jq -r . 00:17:24.489 [2024-04-26 13:27:41.851794] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:17:24.489 [2024-04-26 13:27:41.851886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64234 ] 00:17:24.747 [2024-04-26 13:27:41.995474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.747 [2024-04-26 13:27:42.123077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.747 [2024-04-26 13:27:42.123236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.747 [2024-04-26 13:27:42.123310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.747 [2024-04-26 13:27:42.123605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val=0xf 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.747 13:27:42 -- accel/accel.sh@20 -- # val=decompress 00:17:24.747 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.747 13:27:42 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.747 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val=software 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@22 -- # accel_module=software 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 
00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val=32 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val=32 00:17:24.748 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:24.748 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:24.748 13:27:42 -- accel/accel.sh@20 -- # val=1 00:17:25.006 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:25.006 13:27:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:25.006 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:25.006 13:27:42 -- accel/accel.sh@20 -- # val=Yes 00:17:25.006 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:25.006 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:25.006 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:25.006 13:27:42 -- accel/accel.sh@20 -- # val= 00:17:25.006 13:27:42 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # IFS=: 00:17:25.006 13:27:42 -- accel/accel.sh@19 -- # read -r var val 00:17:25.977 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.977 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.977 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.977 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.977 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.977 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.977 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.977 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.977 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.978 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.978 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.978 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.978 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 13:27:43 -- 
accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:25.978 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:25.978 ************************************ 00:17:25.978 END TEST accel_decomp_full_mcore 00:17:25.978 ************************************ 00:17:25.978 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:25.978 13:27:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:25.978 13:27:43 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:25.978 13:27:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:25.978 00:17:25.978 real 0m1.578s 00:17:25.978 user 0m4.787s 00:17:25.978 sys 0m0.144s 00:17:25.978 13:27:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.978 13:27:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.236 13:27:43 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:26.236 13:27:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:17:26.236 13:27:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.236 13:27:43 -- common/autotest_common.sh@10 -- # set +x 00:17:26.236 ************************************ 00:17:26.236 START TEST accel_decomp_mthread 00:17:26.236 ************************************ 00:17:26.236 13:27:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:26.236 13:27:43 -- accel/accel.sh@16 -- # local accel_opc 00:17:26.236 13:27:43 -- accel/accel.sh@17 -- # local accel_module 00:17:26.236 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.236 13:27:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:26.236 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.236 13:27:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:17:26.236 13:27:43 -- accel/accel.sh@12 -- # build_accel_config 00:17:26.236 13:27:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:26.236 13:27:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:26.236 13:27:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:26.236 13:27:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:26.236 13:27:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:26.236 13:27:43 -- accel/accel.sh@40 -- # local IFS=, 00:17:26.236 13:27:43 -- accel/accel.sh@41 -- # jq -r . 00:17:26.236 [2024-04-26 13:27:43.549573] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:17:26.236 [2024-04-26 13:27:43.549670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64276 ] 00:17:26.495 [2024-04-26 13:27:43.688212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.495 [2024-04-26 13:27:43.803751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=0x1 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=decompress 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=software 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@22 -- # accel_module=software 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=32 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- 
accel/accel.sh@20 -- # val=32 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=2 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val=Yes 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:26.495 13:27:43 -- accel/accel.sh@20 -- # val= 00:17:26.495 13:27:43 -- accel/accel.sh@21 -- # case "$var" in 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # IFS=: 00:17:26.495 13:27:43 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 ************************************ 00:17:27.869 END TEST accel_decomp_mthread 00:17:27.869 ************************************ 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:27.869 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.869 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.869 13:27:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:27.869 13:27:45 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:27.869 13:27:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:27.869 00:17:27.869 real 0m1.533s 00:17:27.869 user 0m1.319s 00:17:27.869 sys 0m0.113s 00:17:27.869 13:27:45 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:17:27.869 13:27:45 -- common/autotest_common.sh@10 -- # set +x 00:17:27.869 13:27:45 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:27.869 13:27:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:17:27.869 13:27:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:27.869 13:27:45 -- common/autotest_common.sh@10 -- # set +x 00:17:27.869 ************************************ 00:17:27.869 START TEST accel_deomp_full_mthread 00:17:27.869 ************************************ 00:17:27.870 13:27:45 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:27.870 13:27:45 -- accel/accel.sh@16 -- # local accel_opc 00:17:27.870 13:27:45 -- accel/accel.sh@17 -- # local accel_module 00:17:27.870 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:27.870 13:27:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:27.870 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:27.870 13:27:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:17:27.870 13:27:45 -- accel/accel.sh@12 -- # build_accel_config 00:17:27.870 13:27:45 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:27.870 13:27:45 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:27.870 13:27:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:27.870 13:27:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:27.870 13:27:45 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:27.870 13:27:45 -- accel/accel.sh@40 -- # local IFS=, 00:17:27.870 13:27:45 -- accel/accel.sh@41 -- # jq -r . 00:17:27.870 [2024-04-26 13:27:45.199237] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
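For context, every decompress variant in this section drives the same accel_perf example binary; the command recorded just above differs from the earlier variants only in its threading/core-mask flags. A minimal hand-run sketch, assuming an SPDK build under /home/vagrant/spdk_repo/spdk and using only flags that appear in this log (the harness additionally feeds an accel JSON config via -c /dev/fd/62, omitted here; SPDK_DIR is shorthand introduced for the sketch):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" \
      -t 1 \                           # run the workload for 1 second
      -w decompress \                  # decompress workload (software module in this run)
      -l "$SPDK_DIR/test/accel/bib" \  # compressed input file used by these tests
      -y -o 0 -T 2                     # remaining flags exactly as recorded above; see accel_perf -h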
00:17:27.870 [2024-04-26 13:27:45.199365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64321 ] 00:17:28.238 [2024-04-26 13:27:45.334055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.238 [2024-04-26 13:27:45.444443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=0x1 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=decompress 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@23 -- # accel_opc=decompress 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val='111250 bytes' 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=software 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@22 -- # accel_module=software 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=32 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- 
accel/accel.sh@20 -- # val=32 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=2 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val=Yes 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:28.238 13:27:45 -- accel/accel.sh@20 -- # val= 00:17:28.238 13:27:45 -- accel/accel.sh@21 -- # case "$var" in 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # IFS=: 00:17:28.238 13:27:45 -- accel/accel.sh@19 -- # read -r var val 00:17:29.610 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.610 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.610 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.610 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.610 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.610 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.611 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.611 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.611 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.611 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@20 -- # val= 00:17:29.611 13:27:46 -- accel/accel.sh@21 -- # case "$var" in 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # IFS=: 00:17:29.611 13:27:46 -- accel/accel.sh@19 -- # read -r var val 00:17:29.611 13:27:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:17:29.611 13:27:46 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:17:29.611 13:27:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:29.611 00:17:29.611 real 0m1.555s 00:17:29.611 user 0m1.344s 00:17:29.611 sys 0m0.119s 00:17:29.611 13:27:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:29.611 13:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.611 ************************************ 00:17:29.611 END 
TEST accel_deomp_full_mthread 00:17:29.611 ************************************ 00:17:29.611 13:27:46 -- accel/accel.sh@124 -- # [[ n == y ]] 00:17:29.611 13:27:46 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:29.611 13:27:46 -- accel/accel.sh@137 -- # build_accel_config 00:17:29.611 13:27:46 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:29.611 13:27:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:29.611 13:27:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:17:29.611 13:27:46 -- common/autotest_common.sh@10 -- # set +x 00:17:29.611 13:27:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:17:29.611 13:27:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:17:29.611 13:27:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:17:29.611 13:27:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:17:29.611 13:27:46 -- accel/accel.sh@40 -- # local IFS=, 00:17:29.611 13:27:46 -- accel/accel.sh@41 -- # jq -r . 00:17:29.611 ************************************ 00:17:29.611 START TEST accel_dif_functional_tests 00:17:29.611 ************************************ 00:17:29.611 13:27:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:17:29.611 [2024-04-26 13:27:46.907760] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:29.611 [2024-04-26 13:27:46.907898] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64355 ] 00:17:29.611 [2024-04-26 13:27:47.046390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.869 [2024-04-26 13:27:47.162133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.869 [2024-04-26 13:27:47.162294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.869 [2024-04-26 13:27:47.162295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.869 00:17:29.869 00:17:29.869 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.869 http://cunit.sourceforge.net/ 00:17:29.869 00:17:29.869 00:17:29.869 Suite: accel_dif 00:17:29.869 Test: verify: DIF generated, GUARD check ...passed 00:17:29.869 Test: verify: DIF generated, APPTAG check ...passed 00:17:29.869 Test: verify: DIF generated, REFTAG check ...passed 00:17:29.869 Test: verify: DIF not generated, GUARD check ...passed 00:17:29.869 Test: verify: DIF not generated, APPTAG check ...passed 00:17:29.869 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 13:27:47.258265] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:29.869 [2024-04-26 13:27:47.258472] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:17:29.869 [2024-04-26 13:27:47.258518] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:29.869 [2024-04-26 13:27:47.258550] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:17:29.869 [2024-04-26 13:27:47.258588] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:17:29.869 passed 00:17:29.869 Test: verify: APPTAG correct, APPTAG check ...[2024-04-26 13:27:47.258865] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:17:29.869 passed 00:17:29.869 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:17:29.869 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:17:29.869 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:17:29.869 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-04-26 13:27:47.258970] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:17:29.869 passed 00:17:29.869 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:17:29.869 Test: generate copy: DIF generated, GUARD check ...[2024-04-26 13:27:47.259213] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:17:29.869 passed 00:17:29.869 Test: generate copy: DIF generated, APTTAG check ...passed 00:17:29.869 Test: generate copy: DIF generated, REFTAG check ...passed 00:17:29.869 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:17:29.869 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:17:29.869 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:17:29.869 Test: generate copy: iovecs-len validate ...passed 00:17:29.869 Test: generate copy: buffer alignment validate ...[2024-04-26 13:27:47.259683] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:17:29.869 passed 00:17:29.869 00:17:29.869 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.869 suites 1 1 n/a 0 0 00:17:29.869 tests 20 20 20 0 0 00:17:29.869 asserts 204 204 204 0 n/a 00:17:29.869 00:17:29.869 Elapsed time = 0.005 seconds 00:17:30.126 00:17:30.126 real 0m0.658s 00:17:30.126 user 0m0.809s 00:17:30.126 sys 0m0.162s 00:17:30.126 13:27:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:30.126 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.126 ************************************ 00:17:30.126 END TEST accel_dif_functional_tests 00:17:30.126 ************************************ 00:17:30.126 00:17:30.126 real 0m37.532s 00:17:30.126 user 0m38.102s 00:17:30.126 sys 0m4.874s 00:17:30.126 ************************************ 00:17:30.126 END TEST accel 00:17:30.126 ************************************ 00:17:30.126 13:27:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:30.126 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.386 13:27:47 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:30.386 13:27:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:30.386 13:27:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.386 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.386 ************************************ 00:17:30.386 START TEST accel_rpc 00:17:30.386 ************************************ 00:17:30.386 13:27:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:17:30.386 * Looking for test storage... 00:17:30.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:17:30.386 13:27:47 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:30.386 13:27:47 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64430 00:17:30.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
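The accel_rpc test starting here exercises opcode-to-module assignment over JSON-RPC. Condensed from the xtrace that follows, the flow is roughly the sketch below (binary and script paths as they appear in this log; backgrounding with & stands in for the harness's waitforlisten helper, and rpc is shorthand introduced for the sketch):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc accel_assign_opc -o copy -m incorrect    # accepted while framework init is still deferred
  $rpc accel_assign_opc -o copy -m software     # overrides the previous assignment
  $rpc framework_start_init
  $rpc accel_get_opc_assignments | jq -r .copy  # expected to report "software"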
00:17:30.386 13:27:47 -- accel/accel_rpc.sh@15 -- # waitforlisten 64430 00:17:30.386 13:27:47 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:30.386 13:27:47 -- common/autotest_common.sh@817 -- # '[' -z 64430 ']' 00:17:30.386 13:27:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.386 13:27:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.386 13:27:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.386 13:27:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.386 13:27:47 -- common/autotest_common.sh@10 -- # set +x 00:17:30.386 [2024-04-26 13:27:47.799640] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:30.386 [2024-04-26 13:27:47.799754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64430 ] 00:17:30.644 [2024-04-26 13:27:47.936618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.644 [2024-04-26 13:27:48.054552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.580 13:27:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.580 13:27:48 -- common/autotest_common.sh@850 -- # return 0 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:17:31.580 13:27:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:31.580 13:27:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:31.580 13:27:48 -- common/autotest_common.sh@10 -- # set +x 00:17:31.580 ************************************ 00:17:31.580 START TEST accel_assign_opcode 00:17:31.580 ************************************ 00:17:31.580 13:27:48 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:17:31.580 13:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.580 13:27:48 -- common/autotest_common.sh@10 -- # set +x 00:17:31.580 [2024-04-26 13:27:48.839162] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:17:31.580 13:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:17:31.580 13:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.580 13:27:48 -- common/autotest_common.sh@10 -- # set +x 00:17:31.580 [2024-04-26 13:27:48.847202] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:17:31.580 13:27:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.580 13:27:48 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:17:31.580 13:27:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.580 13:27:48 -- common/autotest_common.sh@10 -- # set +x 00:17:31.839 13:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.839 13:27:49 -- 
accel/accel_rpc.sh@42 -- # jq -r .copy 00:17:31.839 13:27:49 -- accel/accel_rpc.sh@42 -- # grep software 00:17:31.839 13:27:49 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:17:31.839 13:27:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.839 13:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:31.839 13:27:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.839 software 00:17:31.839 ************************************ 00:17:31.839 END TEST accel_assign_opcode 00:17:31.839 ************************************ 00:17:31.839 00:17:31.839 real 0m0.305s 00:17:31.839 user 0m0.058s 00:17:31.839 sys 0m0.008s 00:17:31.839 13:27:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:31.839 13:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:31.839 13:27:49 -- accel/accel_rpc.sh@55 -- # killprocess 64430 00:17:31.839 13:27:49 -- common/autotest_common.sh@936 -- # '[' -z 64430 ']' 00:17:31.839 13:27:49 -- common/autotest_common.sh@940 -- # kill -0 64430 00:17:31.839 13:27:49 -- common/autotest_common.sh@941 -- # uname 00:17:31.839 13:27:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.839 13:27:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64430 00:17:31.839 killing process with pid 64430 00:17:31.839 13:27:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.839 13:27:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.839 13:27:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64430' 00:17:31.839 13:27:49 -- common/autotest_common.sh@955 -- # kill 64430 00:17:31.839 13:27:49 -- common/autotest_common.sh@960 -- # wait 64430 00:17:32.408 ************************************ 00:17:32.408 END TEST accel_rpc 00:17:32.408 ************************************ 00:17:32.408 00:17:32.408 real 0m1.985s 00:17:32.408 user 0m2.091s 00:17:32.408 sys 0m0.471s 00:17:32.408 13:27:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.408 13:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 13:27:49 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:32.408 13:27:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:32.408 13:27:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.408 13:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:32.408 ************************************ 00:17:32.408 START TEST app_cmdline 00:17:32.408 ************************************ 00:17:32.408 13:27:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:17:32.408 * Looking for test storage... 00:17:32.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:32.408 13:27:49 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:17:32.408 13:27:49 -- app/cmdline.sh@17 -- # spdk_tgt_pid=64551 00:17:32.408 13:27:49 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:17:32.408 13:27:49 -- app/cmdline.sh@18 -- # waitforlisten 64551 00:17:32.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
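The cmdline test launched above restricts the target to an RPC whitelist (--rpcs-allowed spdk_get_version,rpc_get_methods). A sketch of the checks it performs, taken from the rpc.py calls recorded in this log ($rpc is shorthand introduced for the sketch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc spdk_get_version                       # returns the version JSON shown below
  $rpc rpc_get_methods | jq -r '.[]' | sort   # expected to list exactly the two whitelisted methods
  $rpc env_dpdk_get_mem_stats                 # not whitelisted: fails with -32601 "Method not found"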
00:17:32.408 13:27:49 -- common/autotest_common.sh@817 -- # '[' -z 64551 ']' 00:17:32.408 13:27:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.408 13:27:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:32.408 13:27:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.408 13:27:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:32.408 13:27:49 -- common/autotest_common.sh@10 -- # set +x 00:17:32.667 [2024-04-26 13:27:49.907825] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:32.667 [2024-04-26 13:27:49.907949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64551 ] 00:17:32.667 [2024-04-26 13:27:50.050155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.926 [2024-04-26 13:27:50.165289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.493 13:27:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:33.493 13:27:50 -- common/autotest_common.sh@850 -- # return 0 00:17:33.493 13:27:50 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:17:33.752 { 00:17:33.752 "fields": { 00:17:33.752 "commit": "f93182c78", 00:17:33.752 "major": 24, 00:17:33.752 "minor": 5, 00:17:33.752 "patch": 0, 00:17:33.752 "suffix": "-pre" 00:17:33.752 }, 00:17:33.752 "version": "SPDK v24.05-pre git sha1 f93182c78" 00:17:33.752 } 00:17:33.752 13:27:51 -- app/cmdline.sh@22 -- # expected_methods=() 00:17:33.752 13:27:51 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:17:33.752 13:27:51 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:17:33.752 13:27:51 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:17:33.752 13:27:51 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:17:33.752 13:27:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.752 13:27:51 -- common/autotest_common.sh@10 -- # set +x 00:17:33.752 13:27:51 -- app/cmdline.sh@26 -- # sort 00:17:33.752 13:27:51 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:17:33.752 13:27:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.011 13:27:51 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:17:34.011 13:27:51 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:17:34.011 13:27:51 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:34.011 13:27:51 -- common/autotest_common.sh@638 -- # local es=0 00:17:34.011 13:27:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:34.011 13:27:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.011 13:27:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.011 13:27:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.011 13:27:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:34.011 13:27:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.011 13:27:51 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:17:34.011 13:27:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.011 13:27:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.011 13:27:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:17:34.269 2024/04/26 13:27:51 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:17:34.269 request: 00:17:34.269 { 00:17:34.269 "method": "env_dpdk_get_mem_stats", 00:17:34.269 "params": {} 00:17:34.269 } 00:17:34.269 Got JSON-RPC error response 00:17:34.269 GoRPCClient: error on JSON-RPC call 00:17:34.269 13:27:51 -- common/autotest_common.sh@641 -- # es=1 00:17:34.269 13:27:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:34.269 13:27:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:34.269 13:27:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:34.269 13:27:51 -- app/cmdline.sh@1 -- # killprocess 64551 00:17:34.269 13:27:51 -- common/autotest_common.sh@936 -- # '[' -z 64551 ']' 00:17:34.269 13:27:51 -- common/autotest_common.sh@940 -- # kill -0 64551 00:17:34.269 13:27:51 -- common/autotest_common.sh@941 -- # uname 00:17:34.269 13:27:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.269 13:27:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64551 00:17:34.269 13:27:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.269 13:27:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.269 killing process with pid 64551 00:17:34.269 13:27:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64551' 00:17:34.269 13:27:51 -- common/autotest_common.sh@955 -- # kill 64551 00:17:34.269 13:27:51 -- common/autotest_common.sh@960 -- # wait 64551 00:17:34.837 ************************************ 00:17:34.837 END TEST app_cmdline 00:17:34.837 ************************************ 00:17:34.837 00:17:34.837 real 0m2.232s 00:17:34.837 user 0m2.796s 00:17:34.837 sys 0m0.517s 00:17:34.837 13:27:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.837 13:27:51 -- common/autotest_common.sh@10 -- # set +x 00:17:34.837 13:27:52 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:34.837 13:27:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:34.837 13:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.837 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:34.837 ************************************ 00:17:34.837 START TEST version 00:17:34.837 ************************************ 00:17:34.837 13:27:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:17:34.837 * Looking for test storage... 
00:17:34.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:17:34.837 13:27:52 -- app/version.sh@17 -- # get_header_version major 00:17:34.837 13:27:52 -- app/version.sh@14 -- # cut -f2 00:17:34.837 13:27:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:34.837 13:27:52 -- app/version.sh@14 -- # tr -d '"' 00:17:34.837 13:27:52 -- app/version.sh@17 -- # major=24 00:17:34.837 13:27:52 -- app/version.sh@18 -- # get_header_version minor 00:17:34.837 13:27:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:34.837 13:27:52 -- app/version.sh@14 -- # cut -f2 00:17:34.837 13:27:52 -- app/version.sh@14 -- # tr -d '"' 00:17:34.837 13:27:52 -- app/version.sh@18 -- # minor=5 00:17:34.837 13:27:52 -- app/version.sh@19 -- # get_header_version patch 00:17:34.837 13:27:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:34.837 13:27:52 -- app/version.sh@14 -- # cut -f2 00:17:34.837 13:27:52 -- app/version.sh@14 -- # tr -d '"' 00:17:34.837 13:27:52 -- app/version.sh@19 -- # patch=0 00:17:34.837 13:27:52 -- app/version.sh@20 -- # get_header_version suffix 00:17:34.837 13:27:52 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:17:34.837 13:27:52 -- app/version.sh@14 -- # cut -f2 00:17:34.837 13:27:52 -- app/version.sh@14 -- # tr -d '"' 00:17:34.837 13:27:52 -- app/version.sh@20 -- # suffix=-pre 00:17:34.837 13:27:52 -- app/version.sh@22 -- # version=24.5 00:17:34.837 13:27:52 -- app/version.sh@25 -- # (( patch != 0 )) 00:17:34.837 13:27:52 -- app/version.sh@28 -- # version=24.5rc0 00:17:34.837 13:27:52 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:34.837 13:27:52 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:17:34.837 13:27:52 -- app/version.sh@30 -- # py_version=24.5rc0 00:17:34.837 13:27:52 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:17:34.837 00:17:34.837 real 0m0.151s 00:17:34.837 user 0m0.081s 00:17:34.837 sys 0m0.099s 00:17:34.837 13:27:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:34.837 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:34.837 ************************************ 00:17:34.837 END TEST version 00:17:34.837 ************************************ 00:17:34.837 13:27:52 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:17:34.837 13:27:52 -- spdk/autotest.sh@194 -- # uname -s 00:17:35.097 13:27:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:35.097 13:27:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.097 13:27:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:35.097 13:27:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@258 -- # timing_exit lib 00:17:35.097 13:27:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:35.097 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.097 13:27:52 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:17:35.097 13:27:52 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:17:35.097 13:27:52 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:35.097 13:27:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.097 13:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.097 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.097 ************************************ 00:17:35.097 START TEST nvmf_tcp 00:17:35.097 ************************************ 00:17:35.097 13:27:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:17:35.097 * Looking for test storage... 00:17:35.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@10 -- # uname -s 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.097 13:27:52 -- nvmf/common.sh@7 -- # uname -s 00:17:35.097 13:27:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.097 13:27:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.097 13:27:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.097 13:27:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.097 13:27:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.097 13:27:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.097 13:27:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.097 13:27:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.097 13:27:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.097 13:27:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.097 13:27:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:35.097 13:27:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:35.097 13:27:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.097 13:27:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.097 13:27:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.097 13:27:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.097 13:27:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.097 13:27:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.097 13:27:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.097 13:27:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.097 13:27:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.097 13:27:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.097 13:27:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.097 13:27:52 -- paths/export.sh@5 -- # export PATH 00:17:35.097 13:27:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.097 13:27:52 -- nvmf/common.sh@47 -- # : 0 00:17:35.097 13:27:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.097 13:27:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.097 13:27:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.097 13:27:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.097 13:27:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.097 13:27:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.097 13:27:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:17:35.097 13:27:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.097 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:17:35.097 13:27:52 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:17:35.097 13:27:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.097 13:27:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.097 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.355 ************************************ 00:17:35.355 START TEST nvmf_example 00:17:35.355 ************************************ 00:17:35.355 13:27:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:17:35.355 * Looking for test storage... 
00:17:35.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:35.355 13:27:52 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:35.355 13:27:52 -- nvmf/common.sh@7 -- # uname -s 00:17:35.355 13:27:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.355 13:27:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.355 13:27:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.355 13:27:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.355 13:27:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.356 13:27:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.356 13:27:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.356 13:27:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.356 13:27:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.356 13:27:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.356 13:27:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:35.356 13:27:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:35.356 13:27:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.356 13:27:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.356 13:27:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:35.356 13:27:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.356 13:27:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:35.356 13:27:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.356 13:27:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.356 13:27:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.356 13:27:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.356 13:27:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.356 13:27:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.356 13:27:52 -- paths/export.sh@5 -- # export PATH 00:17:35.356 13:27:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.356 13:27:52 -- nvmf/common.sh@47 -- # : 0 00:17:35.356 13:27:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.356 13:27:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.356 13:27:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.356 13:27:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.356 13:27:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.356 13:27:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.356 13:27:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.356 13:27:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.356 13:27:52 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:17:35.356 13:27:52 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:17:35.356 13:27:52 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:17:35.356 13:27:52 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:17:35.356 13:27:52 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:17:35.356 13:27:52 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:17:35.356 13:27:52 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:17:35.356 13:27:52 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:17:35.356 13:27:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.356 13:27:52 -- common/autotest_common.sh@10 -- # set +x 00:17:35.356 13:27:52 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:17:35.356 13:27:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:35.356 13:27:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.356 13:27:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:35.356 13:27:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:35.356 13:27:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:35.356 13:27:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.356 13:27:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.356 13:27:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.356 13:27:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:35.356 13:27:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:35.356 13:27:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:35.356 13:27:52 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:17:35.356 13:27:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:35.356 13:27:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:35.356 13:27:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.356 13:27:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.356 13:27:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:35.356 13:27:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:35.356 13:27:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:35.356 13:27:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:35.356 13:27:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:35.356 13:27:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.356 13:27:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:35.356 13:27:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:35.356 13:27:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:35.356 13:27:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:35.356 13:27:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:35.356 Cannot find device "nvmf_init_br" 00:17:35.356 13:27:52 -- nvmf/common.sh@154 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:35.356 Cannot find device "nvmf_tgt_br" 00:17:35.356 13:27:52 -- nvmf/common.sh@155 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:35.356 Cannot find device "nvmf_tgt_br2" 00:17:35.356 13:27:52 -- nvmf/common.sh@156 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:35.356 Cannot find device "nvmf_init_br" 00:17:35.356 13:27:52 -- nvmf/common.sh@157 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:35.356 Cannot find device "nvmf_tgt_br" 00:17:35.356 13:27:52 -- nvmf/common.sh@158 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:35.356 Cannot find device "nvmf_tgt_br2" 00:17:35.356 13:27:52 -- nvmf/common.sh@159 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:35.356 Cannot find device "nvmf_br" 00:17:35.356 13:27:52 -- nvmf/common.sh@160 -- # true 00:17:35.356 13:27:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:35.614 Cannot find device "nvmf_init_if" 00:17:35.614 13:27:52 -- nvmf/common.sh@161 -- # true 00:17:35.615 13:27:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:35.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.615 13:27:52 -- nvmf/common.sh@162 -- # true 00:17:35.615 13:27:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:35.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:35.615 13:27:52 -- nvmf/common.sh@163 -- # true 00:17:35.615 13:27:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:35.615 13:27:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:35.615 13:27:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:35.615 13:27:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:35.615 13:27:52 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.615 13:27:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.615 13:27:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.615 13:27:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:35.615 13:27:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:35.615 13:27:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:35.615 13:27:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:35.615 13:27:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:35.615 13:27:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:35.615 13:27:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.615 13:27:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.615 13:27:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.615 13:27:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:35.615 13:27:53 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:35.874 13:27:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.874 13:27:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.874 13:27:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.874 13:27:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.874 13:27:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.874 13:27:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:35.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:17:35.874 00:17:35.874 --- 10.0.0.2 ping statistics --- 00:17:35.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.874 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:35.874 13:27:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:35.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:35.874 00:17:35.874 --- 10.0.0.3 ping statistics --- 00:17:35.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.874 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:35.874 13:27:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:35.874 00:17:35.874 --- 10.0.0.1 ping statistics --- 00:17:35.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.874 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:35.874 13:27:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.874 13:27:53 -- nvmf/common.sh@422 -- # return 0 00:17:35.874 13:27:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:35.874 13:27:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.874 13:27:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:35.874 13:27:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:35.874 13:27:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.874 13:27:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:35.874 13:27:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:35.874 13:27:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:17:35.874 13:27:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:17:35.874 13:27:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:35.874 13:27:53 -- common/autotest_common.sh@10 -- # set +x 00:17:35.874 13:27:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:17:35.874 13:27:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:17:35.874 13:27:53 -- target/nvmf_example.sh@34 -- # nvmfpid=64912 00:17:35.874 13:27:53 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:17:35.874 13:27:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:35.874 13:27:53 -- target/nvmf_example.sh@36 -- # waitforlisten 64912 00:17:35.874 13:27:53 -- common/autotest_common.sh@817 -- # '[' -z 64912 ']' 00:17:35.874 13:27:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.874 13:27:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:35.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.874 13:27:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
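The nvmf_veth_init trace above builds the test network in software: a namespace nvmf_tgt_ns_spdk holding the target-side interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the host-side nvmf_init_if (10.0.0.1), and their veth peers joined through the nvmf_br bridge, plus an iptables rule accepting TCP port 4420; the pings verify connectivity in both directions. A minimal sketch of the same topology, using the names and addresses from the trace (run as root; the second target interface and the teardown are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target namespace reachability check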
00:17:35.874 13:27:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:35.874 13:27:53 -- common/autotest_common.sh@10 -- # set +x 00:17:36.808 13:27:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.808 13:27:54 -- common/autotest_common.sh@850 -- # return 0 00:17:36.808 13:27:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:17:36.808 13:27:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.808 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:37.066 13:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.066 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.066 13:27:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:17:37.066 13:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.066 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.066 13:27:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:17:37.066 13:27:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:37.066 13:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.066 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.066 13:27:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:17:37.066 13:27:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.066 13:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.066 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.066 13:27:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.066 13:27:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.066 13:27:54 -- common/autotest_common.sh@10 -- # set +x 00:17:37.066 13:27:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.066 13:27:54 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:37.066 13:27:54 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:49.270 Initializing NVMe Controllers 00:17:49.270 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.270 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:49.270 Initialization complete. Launching workers. 
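Once the example target is up inside the namespace, the rpc_cmd sequence above provisions it before spdk_nvme_perf attaches: a TCP transport, a 64 MB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. A condensed sketch of the same flow issued with scripts/rpc.py against the default /var/tmp/spdk.sock socket (which is what rpc_cmd talks to in this harness); paths are relative to the SPDK repo root:

    # launch the example target in the target namespace (command taken from the trace)
    ip netns exec nvmf_tgt_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    # wait for /var/tmp/spdk.sock before issuing RPCs (the harness does this via waitforlisten)

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                      # creates Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # drive the subsystem from the host side, exactly as the test does
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'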
00:17:49.270 ======================================================== 00:17:49.270 Latency(us) 00:17:49.270 Device Information : IOPS MiB/s Average min max 00:17:49.270 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15216.25 59.44 4205.59 757.72 20233.03 00:17:49.270 ======================================================== 00:17:49.270 Total : 15216.25 59.44 4205.59 757.72 20233.03 00:17:49.270 00:17:49.270 13:28:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:17:49.270 13:28:04 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:17:49.270 13:28:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:49.270 13:28:04 -- nvmf/common.sh@117 -- # sync 00:17:49.270 13:28:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:49.270 13:28:04 -- nvmf/common.sh@120 -- # set +e 00:17:49.270 13:28:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:49.270 13:28:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:49.270 rmmod nvme_tcp 00:17:49.270 rmmod nvme_fabrics 00:17:49.270 rmmod nvme_keyring 00:17:49.270 13:28:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:49.270 13:28:04 -- nvmf/common.sh@124 -- # set -e 00:17:49.270 13:28:04 -- nvmf/common.sh@125 -- # return 0 00:17:49.270 13:28:04 -- nvmf/common.sh@478 -- # '[' -n 64912 ']' 00:17:49.270 13:28:04 -- nvmf/common.sh@479 -- # killprocess 64912 00:17:49.270 13:28:04 -- common/autotest_common.sh@936 -- # '[' -z 64912 ']' 00:17:49.270 13:28:04 -- common/autotest_common.sh@940 -- # kill -0 64912 00:17:49.270 13:28:04 -- common/autotest_common.sh@941 -- # uname 00:17:49.270 13:28:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.270 13:28:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64912 00:17:49.270 13:28:04 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:17:49.270 13:28:04 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:17:49.270 killing process with pid 64912 00:17:49.270 13:28:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64912' 00:17:49.270 13:28:04 -- common/autotest_common.sh@955 -- # kill 64912 00:17:49.270 13:28:04 -- common/autotest_common.sh@960 -- # wait 64912 00:17:49.270 nvmf threads initialize successfully 00:17:49.270 bdev subsystem init successfully 00:17:49.270 created a nvmf target service 00:17:49.270 create targets's poll groups done 00:17:49.270 all subsystems of target started 00:17:49.270 nvmf target is running 00:17:49.270 all subsystems of target stopped 00:17:49.270 destroy targets's poll groups done 00:17:49.270 destroyed the nvmf target service 00:17:49.270 bdev subsystem finish successfully 00:17:49.270 nvmf threads destroy successfully 00:17:49.270 13:28:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:49.270 13:28:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:49.270 13:28:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:49.270 13:28:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:49.270 13:28:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:49.270 13:28:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.270 13:28:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.270 13:28:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.270 13:28:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:49.270 13:28:04 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:17:49.270 13:28:04 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:17:49.270 13:28:04 -- common/autotest_common.sh@10 -- # set +x 00:17:49.270 00:17:49.270 real 0m12.420s 00:17:49.270 user 0m44.448s 00:17:49.270 sys 0m2.021s 00:17:49.270 13:28:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:49.270 ************************************ 00:17:49.270 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:17:49.270 END TEST nvmf_example 00:17:49.270 ************************************ 00:17:49.270 13:28:05 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:49.270 13:28:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:49.270 13:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:49.270 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:17:49.270 ************************************ 00:17:49.270 START TEST nvmf_filesystem 00:17:49.270 ************************************ 00:17:49.270 13:28:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:17:49.270 * Looking for test storage... 00:17:49.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.270 13:28:05 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:17:49.270 13:28:05 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:17:49.270 13:28:05 -- common/autotest_common.sh@34 -- # set -e 00:17:49.270 13:28:05 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:17:49.270 13:28:05 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:17:49.270 13:28:05 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:17:49.270 13:28:05 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:17:49.270 13:28:05 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:17:49.270 13:28:05 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:17:49.270 13:28:05 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:17:49.270 13:28:05 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:17:49.270 13:28:05 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:17:49.270 13:28:05 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:17:49.270 13:28:05 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:17:49.270 13:28:05 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:17:49.270 13:28:05 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:17:49.270 13:28:05 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:17:49.270 13:28:05 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:17:49.270 13:28:05 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:17:49.270 13:28:05 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:17:49.270 13:28:05 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:17:49.270 13:28:05 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:17:49.270 13:28:05 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:17:49.270 13:28:05 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:17:49.270 13:28:05 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:17:49.270 13:28:05 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:17:49.270 13:28:05 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:49.270 13:28:05 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:17:49.270 13:28:05 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:17:49.270 13:28:05 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:17:49.270 13:28:05 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:17:49.270 13:28:05 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:17:49.270 13:28:05 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:17:49.270 13:28:05 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:17:49.270 13:28:05 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:17:49.270 13:28:05 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:17:49.270 13:28:05 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:17:49.271 13:28:05 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:17:49.271 13:28:05 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:17:49.271 13:28:05 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:17:49.271 13:28:05 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:17:49.271 13:28:05 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:17:49.271 13:28:05 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:17:49.271 13:28:05 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:17:49.271 13:28:05 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:17:49.271 13:28:05 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:17:49.271 13:28:05 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:17:49.271 13:28:05 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:17:49.271 13:28:05 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:17:49.271 13:28:05 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:17:49.271 13:28:05 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:17:49.271 13:28:05 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:17:49.271 13:28:05 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:17:49.271 13:28:05 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:17:49.271 13:28:05 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:17:49.271 13:28:05 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:17:49.271 13:28:05 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:17:49.271 13:28:05 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:17:49.271 13:28:05 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:17:49.271 13:28:05 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:17:49.271 13:28:05 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:17:49.271 13:28:05 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:17:49.271 13:28:05 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:17:49.271 13:28:05 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:17:49.271 13:28:05 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:17:49.271 13:28:05 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:17:49.271 13:28:05 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:17:49.271 13:28:05 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:17:49.271 13:28:05 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:17:49.271 13:28:05 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:17:49.271 13:28:05 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:17:49.271 13:28:05 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:17:49.297 13:28:05 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:17:49.297 13:28:05 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:17:49.297 13:28:05 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:17:49.297 
13:28:05 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:17:49.297 13:28:05 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:17:49.297 13:28:05 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:17:49.297 13:28:05 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:17:49.297 13:28:05 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:17:49.297 13:28:05 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:17:49.297 13:28:05 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:17:49.297 13:28:05 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:17:49.297 13:28:05 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:17:49.297 13:28:05 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:17:49.297 13:28:05 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:17:49.297 13:28:05 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:17:49.297 13:28:05 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:17:49.297 13:28:05 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:17:49.297 13:28:05 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:17:49.297 13:28:05 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:49.297 13:28:05 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:17:49.297 13:28:05 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:17:49.297 13:28:05 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:17:49.297 13:28:05 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:17:49.297 13:28:05 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:17:49.297 13:28:05 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:17:49.297 13:28:05 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:17:49.297 13:28:05 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:17:49.297 13:28:05 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:17:49.297 13:28:05 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:17:49.297 13:28:05 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:17:49.297 13:28:05 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:17:49.297 13:28:05 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:17:49.297 13:28:05 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:17:49.297 13:28:05 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:17:49.297 #define SPDK_CONFIG_H 00:17:49.297 #define SPDK_CONFIG_APPS 1 00:17:49.297 #define SPDK_CONFIG_ARCH native 00:17:49.297 #undef SPDK_CONFIG_ASAN 00:17:49.297 #define SPDK_CONFIG_AVAHI 1 00:17:49.297 #undef SPDK_CONFIG_CET 00:17:49.297 #define SPDK_CONFIG_COVERAGE 1 00:17:49.297 #define SPDK_CONFIG_CROSS_PREFIX 00:17:49.297 #undef SPDK_CONFIG_CRYPTO 00:17:49.297 #undef SPDK_CONFIG_CRYPTO_MLX5 00:17:49.297 #undef SPDK_CONFIG_CUSTOMOCF 00:17:49.297 #undef SPDK_CONFIG_DAOS 00:17:49.297 #define SPDK_CONFIG_DAOS_DIR 00:17:49.297 #define SPDK_CONFIG_DEBUG 1 00:17:49.297 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:17:49.297 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:49.298 #define SPDK_CONFIG_DPDK_INC_DIR 00:17:49.298 #define SPDK_CONFIG_DPDK_LIB_DIR 00:17:49.298 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:17:49.298 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:49.298 #define SPDK_CONFIG_EXAMPLES 1 00:17:49.298 #undef SPDK_CONFIG_FC 00:17:49.298 #define SPDK_CONFIG_FC_PATH 00:17:49.298 #define SPDK_CONFIG_FIO_PLUGIN 1 00:17:49.298 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:17:49.298 #undef SPDK_CONFIG_FUSE 00:17:49.298 #undef SPDK_CONFIG_FUZZER 00:17:49.298 #define SPDK_CONFIG_FUZZER_LIB 00:17:49.298 #define SPDK_CONFIG_GOLANG 1 00:17:49.298 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:17:49.298 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:17:49.298 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:17:49.298 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:17:49.298 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:17:49.298 #undef SPDK_CONFIG_HAVE_LIBBSD 00:17:49.298 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:17:49.298 #define SPDK_CONFIG_IDXD 1 00:17:49.298 #undef SPDK_CONFIG_IDXD_KERNEL 00:17:49.298 #undef SPDK_CONFIG_IPSEC_MB 00:17:49.298 #define SPDK_CONFIG_IPSEC_MB_DIR 00:17:49.298 #define SPDK_CONFIG_ISAL 1 00:17:49.298 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:17:49.298 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:17:49.298 #define SPDK_CONFIG_LIBDIR 00:17:49.298 #undef SPDK_CONFIG_LTO 00:17:49.298 #define SPDK_CONFIG_MAX_LCORES 00:17:49.298 #define SPDK_CONFIG_NVME_CUSE 1 00:17:49.298 #undef SPDK_CONFIG_OCF 00:17:49.298 #define SPDK_CONFIG_OCF_PATH 00:17:49.298 #define SPDK_CONFIG_OPENSSL_PATH 00:17:49.298 #undef SPDK_CONFIG_PGO_CAPTURE 00:17:49.298 #define SPDK_CONFIG_PGO_DIR 00:17:49.298 #undef SPDK_CONFIG_PGO_USE 00:17:49.298 #define SPDK_CONFIG_PREFIX /usr/local 00:17:49.298 #undef SPDK_CONFIG_RAID5F 00:17:49.298 #undef SPDK_CONFIG_RBD 00:17:49.298 #define SPDK_CONFIG_RDMA 1 00:17:49.298 #define SPDK_CONFIG_RDMA_PROV verbs 00:17:49.298 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:17:49.298 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:17:49.298 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:17:49.298 #define SPDK_CONFIG_SHARED 1 00:17:49.298 #undef SPDK_CONFIG_SMA 00:17:49.298 #define SPDK_CONFIG_TESTS 1 00:17:49.298 #undef SPDK_CONFIG_TSAN 00:17:49.298 #define SPDK_CONFIG_UBLK 1 00:17:49.298 #define SPDK_CONFIG_UBSAN 1 00:17:49.298 #undef SPDK_CONFIG_UNIT_TESTS 00:17:49.298 #undef SPDK_CONFIG_URING 00:17:49.298 #define SPDK_CONFIG_URING_PATH 00:17:49.298 #undef SPDK_CONFIG_URING_ZNS 00:17:49.298 #define SPDK_CONFIG_USDT 1 00:17:49.298 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:17:49.298 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:17:49.298 #undef SPDK_CONFIG_VFIO_USER 00:17:49.298 #define SPDK_CONFIG_VFIO_USER_DIR 00:17:49.298 #define SPDK_CONFIG_VHOST 1 00:17:49.298 #define SPDK_CONFIG_VIRTIO 1 00:17:49.298 #undef SPDK_CONFIG_VTUNE 00:17:49.298 #define SPDK_CONFIG_VTUNE_DIR 00:17:49.298 #define SPDK_CONFIG_WERROR 1 00:17:49.298 #define SPDK_CONFIG_WPDK_DIR 00:17:49.298 #undef SPDK_CONFIG_XNVME 00:17:49.298 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:17:49.298 13:28:05 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:17:49.298 13:28:05 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.298 13:28:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.298 13:28:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.298 13:28:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.298 13:28:05 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.298 13:28:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.298 13:28:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.298 13:28:05 -- paths/export.sh@5 -- # export PATH 00:17:49.298 13:28:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.298 13:28:05 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:49.298 13:28:05 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:17:49.298 13:28:05 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:49.298 13:28:05 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:17:49.298 13:28:05 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:17:49.298 13:28:05 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:17:49.298 13:28:05 -- pm/common@67 -- # TEST_TAG=N/A 00:17:49.298 13:28:05 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:17:49.298 13:28:05 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:17:49.298 13:28:05 -- pm/common@71 -- # uname -s 00:17:49.298 13:28:05 -- pm/common@71 -- # PM_OS=Linux 00:17:49.298 13:28:05 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:17:49.298 13:28:05 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:17:49.298 13:28:05 -- pm/common@76 -- # [[ Linux == Linux ]] 00:17:49.298 13:28:05 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:17:49.298 13:28:05 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:17:49.298 13:28:05 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:17:49.298 13:28:05 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:17:49.298 13:28:05 -- common/autotest_common.sh@57 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:17:49.298 13:28:05 -- common/autotest_common.sh@61 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:17:49.298 13:28:05 -- common/autotest_common.sh@63 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:17:49.298 13:28:05 -- common/autotest_common.sh@65 -- # : 1 00:17:49.298 13:28:05 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:17:49.298 13:28:05 -- common/autotest_common.sh@67 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:17:49.298 13:28:05 -- common/autotest_common.sh@69 -- # : 00:17:49.298 13:28:05 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:17:49.298 13:28:05 -- common/autotest_common.sh@71 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:17:49.298 13:28:05 -- common/autotest_common.sh@73 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:17:49.298 13:28:05 -- common/autotest_common.sh@75 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:17:49.298 13:28:05 -- common/autotest_common.sh@77 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:17:49.298 13:28:05 -- common/autotest_common.sh@79 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:17:49.298 13:28:05 -- common/autotest_common.sh@81 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:17:49.298 13:28:05 -- common/autotest_common.sh@83 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:17:49.298 13:28:05 -- common/autotest_common.sh@85 -- # : 0 00:17:49.298 13:28:05 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:17:49.298 13:28:05 -- common/autotest_common.sh@87 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:17:49.299 13:28:05 -- common/autotest_common.sh@89 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:17:49.299 13:28:05 -- common/autotest_common.sh@91 -- # : 1 00:17:49.299 13:28:05 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:17:49.299 13:28:05 -- common/autotest_common.sh@93 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:17:49.299 13:28:05 -- common/autotest_common.sh@95 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:17:49.299 13:28:05 -- common/autotest_common.sh@97 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:17:49.299 13:28:05 -- common/autotest_common.sh@99 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:17:49.299 13:28:05 -- common/autotest_common.sh@101 -- # : tcp 00:17:49.299 13:28:05 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:17:49.299 13:28:05 
-- common/autotest_common.sh@103 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:17:49.299 13:28:05 -- common/autotest_common.sh@105 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:17:49.299 13:28:05 -- common/autotest_common.sh@107 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:17:49.299 13:28:05 -- common/autotest_common.sh@109 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:17:49.299 13:28:05 -- common/autotest_common.sh@111 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:17:49.299 13:28:05 -- common/autotest_common.sh@113 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:17:49.299 13:28:05 -- common/autotest_common.sh@115 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:17:49.299 13:28:05 -- common/autotest_common.sh@117 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:17:49.299 13:28:05 -- common/autotest_common.sh@119 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:17:49.299 13:28:05 -- common/autotest_common.sh@121 -- # : 1 00:17:49.299 13:28:05 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:17:49.299 13:28:05 -- common/autotest_common.sh@123 -- # : 00:17:49.299 13:28:05 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:17:49.299 13:28:05 -- common/autotest_common.sh@125 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:17:49.299 13:28:05 -- common/autotest_common.sh@127 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:17:49.299 13:28:05 -- common/autotest_common.sh@129 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:17:49.299 13:28:05 -- common/autotest_common.sh@131 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:17:49.299 13:28:05 -- common/autotest_common.sh@133 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:17:49.299 13:28:05 -- common/autotest_common.sh@135 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:17:49.299 13:28:05 -- common/autotest_common.sh@137 -- # : 00:17:49.299 13:28:05 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:17:49.299 13:28:05 -- common/autotest_common.sh@139 -- # : true 00:17:49.299 13:28:05 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:17:49.299 13:28:05 -- common/autotest_common.sh@141 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:17:49.299 13:28:05 -- common/autotest_common.sh@143 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:17:49.299 13:28:05 -- common/autotest_common.sh@145 -- # : 1 00:17:49.299 13:28:05 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:17:49.299 13:28:05 -- common/autotest_common.sh@147 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:17:49.299 13:28:05 -- common/autotest_common.sh@149 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:17:49.299 
13:28:05 -- common/autotest_common.sh@151 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:17:49.299 13:28:05 -- common/autotest_common.sh@153 -- # : 00:17:49.299 13:28:05 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:17:49.299 13:28:05 -- common/autotest_common.sh@155 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:17:49.299 13:28:05 -- common/autotest_common.sh@157 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:17:49.299 13:28:05 -- common/autotest_common.sh@159 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:17:49.299 13:28:05 -- common/autotest_common.sh@161 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:17:49.299 13:28:05 -- common/autotest_common.sh@163 -- # : 0 00:17:49.299 13:28:05 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:17:49.299 13:28:05 -- common/autotest_common.sh@166 -- # : 00:17:49.299 13:28:05 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:17:49.299 13:28:05 -- common/autotest_common.sh@168 -- # : 1 00:17:49.299 13:28:05 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:17:49.299 13:28:05 -- common/autotest_common.sh@170 -- # : 1 00:17:49.299 13:28:05 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:17:49.299 13:28:05 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:17:49.299 13:28:05 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
00:17:49.299 13:28:05 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:17:49.299 13:28:05 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:17:49.299 13:28:05 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:49.299 13:28:05 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:17:49.299 13:28:05 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:17:49.299 13:28:05 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:17:49.299 13:28:05 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:49.299 13:28:05 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:17:49.299 13:28:05 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:49.299 13:28:05 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:17:49.299 13:28:05 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:17:49.299 13:28:05 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:17:49.299 13:28:05 -- common/autotest_common.sh@199 -- # cat 00:17:49.299 13:28:05 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:17:49.299 13:28:05 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:49.299 13:28:05 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:17:49.299 13:28:05 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:49.299 13:28:05 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:17:49.299 13:28:05 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:17:49.299 13:28:05 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:17:49.299 13:28:05 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:49.299 13:28:05 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:17:49.299 13:28:05 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:49.299 13:28:05 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:17:49.299 13:28:05 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:49.299 13:28:05 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:49.299 13:28:05 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:49.299 13:28:05 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:49.299 13:28:05 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:49.299 13:28:05 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:17:49.299 13:28:05 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:49.299 13:28:05 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:49.300 13:28:05 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:17:49.300 13:28:05 -- common/autotest_common.sh@252 -- # export valgrind= 00:17:49.300 13:28:05 -- common/autotest_common.sh@252 -- # valgrind= 00:17:49.300 13:28:05 -- common/autotest_common.sh@258 -- # uname -s 00:17:49.300 13:28:05 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:17:49.300 13:28:05 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:17:49.300 13:28:05 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:17:49.300 13:28:05 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:17:49.300 13:28:05 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@268 -- # MAKE=make 00:17:49.300 13:28:05 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:17:49.300 13:28:05 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:17:49.300 13:28:05 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:17:49.300 13:28:05 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:17:49.300 13:28:05 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:17:49.300 13:28:05 -- common/autotest_common.sh@289 -- # for i in "$@" 00:17:49.300 13:28:05 -- common/autotest_common.sh@290 -- # case "$i" in 00:17:49.300 13:28:05 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:17:49.300 13:28:05 -- common/autotest_common.sh@307 -- # [[ -z 65178 ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@307 -- # kill -0 65178 00:17:49.300 13:28:05 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:17:49.300 13:28:05 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:17:49.300 13:28:05 -- common/autotest_common.sh@320 -- # local mount target_dir 00:17:49.300 13:28:05 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:17:49.300 13:28:05 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:17:49.300 13:28:05 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:17:49.300 13:28:05 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:17:49.300 13:28:05 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.DarAWD 00:17:49.300 13:28:05 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:17:49.300 13:28:05 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.DarAWD/tests/target /tmp/spdk.DarAWD 00:17:49.300 13:28:05 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:17:49.300 13:28:05 -- common/autotest_common.sh@316 -- # df -T 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=6266609664 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267887616 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=1277952 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=13794525184 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230002176 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=13794525184 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230002176 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:17:49.300 13:28:05 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267756544 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:17:49.300 13:28:05 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # avails["$mount"]=93283217408 00:17:49.300 13:28:05 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:17:49.300 13:28:05 -- common/autotest_common.sh@352 -- # uses["$mount"]=6419562496 00:17:49.300 13:28:05 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:17:49.300 13:28:05 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:17:49.300 * Looking for test storage... 
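The trace around this point records set_test_storage: the harness asks for roughly 2 GiB, runs df -T, notes each mount's filesystem type and free space, and settles on the first candidate directory whose backing filesystem has enough room. A minimal bash sketch of that selection idea follows; the helper name, the fallback path and the use of df --output are illustrative simplifications, not the harness's exact code, which builds associative arrays from the full df -T listing.

```bash
#!/usr/bin/env bash
# Sketch: pick the first candidate directory whose filesystem has enough free space.
# requested_size is in bytes; df -B1 reports available space in bytes too.
pick_test_storage() {
    local requested_size=$1; shift
    local dir avail
    for dir in "$@"; do
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n1)
        if [[ -n "$avail" && "$avail" -ge "$requested_size" ]]; then
            echo "$dir"
            return 0
        fi
    done
    return 1
}

# Example: ask for 2 GiB, preferring the test dir, then a hypothetical tmp fallback.
pick_test_storage $((2 * 1024 * 1024 * 1024)) \
    /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk_fallback
```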
00:17:49.300 13:28:05 -- common/autotest_common.sh@357 -- # local target_space new_size 00:17:49.300 13:28:05 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:17:49.300 13:28:05 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:17:49.300 13:28:05 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.300 13:28:05 -- common/autotest_common.sh@361 -- # mount=/home 00:17:49.300 13:28:05 -- common/autotest_common.sh@363 -- # target_space=13794525184 00:17:49.300 13:28:05 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:17:49.300 13:28:05 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:17:49.300 13:28:05 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:17:49.300 13:28:05 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.300 13:28:05 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.300 13:28:05 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:49.300 13:28:05 -- common/autotest_common.sh@378 -- # return 0 00:17:49.300 13:28:05 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:17:49.300 13:28:05 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:17:49.300 13:28:05 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:17:49.300 13:28:05 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:17:49.300 13:28:05 -- common/autotest_common.sh@1673 -- # true 00:17:49.301 13:28:05 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:17:49.301 13:28:05 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:17:49.301 13:28:05 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:17:49.301 13:28:05 -- common/autotest_common.sh@27 -- # exec 00:17:49.301 13:28:05 -- common/autotest_common.sh@29 -- # exec 00:17:49.301 13:28:05 -- common/autotest_common.sh@31 -- # xtrace_restore 00:17:49.301 13:28:05 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:17:49.301 13:28:05 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:17:49.301 13:28:05 -- common/autotest_common.sh@18 -- # set -x 00:17:49.301 13:28:05 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.301 13:28:05 -- nvmf/common.sh@7 -- # uname -s 00:17:49.301 13:28:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.301 13:28:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.301 13:28:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.301 13:28:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.301 13:28:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.301 13:28:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.301 13:28:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.301 13:28:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.301 13:28:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.301 13:28:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:49.301 13:28:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:17:49.301 13:28:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.301 13:28:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.301 13:28:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.301 13:28:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.301 13:28:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.301 13:28:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.301 13:28:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.301 13:28:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.301 13:28:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.301 13:28:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.301 13:28:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.301 13:28:05 -- paths/export.sh@5 -- # export PATH 00:17:49.301 13:28:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.301 13:28:05 -- nvmf/common.sh@47 -- # : 0 00:17:49.301 13:28:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.301 13:28:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.301 13:28:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.301 13:28:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.301 13:28:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.301 13:28:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.301 13:28:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.301 13:28:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.301 13:28:05 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:17:49.301 13:28:05 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:49.301 13:28:05 -- target/filesystem.sh@15 -- # nvmftestinit 00:17:49.301 13:28:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:49.301 13:28:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.301 13:28:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:49.301 13:28:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:49.301 13:28:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:49.301 13:28:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.301 13:28:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.301 13:28:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.301 13:28:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:49.301 13:28:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:49.301 13:28:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.301 13:28:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.301 13:28:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:49.301 13:28:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:49.301 13:28:05 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.301 13:28:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.301 13:28:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.301 13:28:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.301 13:28:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.301 13:28:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.301 13:28:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.301 13:28:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.301 13:28:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:49.301 13:28:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:49.301 Cannot find device "nvmf_tgt_br" 00:17:49.301 13:28:05 -- nvmf/common.sh@155 -- # true 00:17:49.301 13:28:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:49.301 Cannot find device "nvmf_tgt_br2" 00:17:49.301 13:28:05 -- nvmf/common.sh@156 -- # true 00:17:49.301 13:28:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:49.301 13:28:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:49.301 Cannot find device "nvmf_tgt_br" 00:17:49.301 13:28:05 -- nvmf/common.sh@158 -- # true 00:17:49.301 13:28:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:49.302 Cannot find device "nvmf_tgt_br2" 00:17:49.302 13:28:05 -- nvmf/common.sh@159 -- # true 00:17:49.302 13:28:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:49.302 13:28:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:49.302 13:28:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.302 13:28:05 -- nvmf/common.sh@162 -- # true 00:17:49.302 13:28:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.302 13:28:05 -- nvmf/common.sh@163 -- # true 00:17:49.302 13:28:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.302 13:28:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.302 13:28:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.302 13:28:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.302 13:28:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.302 13:28:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.302 13:28:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:49.302 13:28:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:49.302 13:28:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:49.302 13:28:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:49.302 13:28:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:49.302 13:28:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:49.302 13:28:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:49.302 13:28:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.302 13:28:05 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.302 13:28:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.302 13:28:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:49.302 13:28:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:49.302 13:28:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:49.302 13:28:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.302 13:28:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.302 13:28:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.302 13:28:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.302 13:28:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:49.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:17:49.302 00:17:49.302 --- 10.0.0.2 ping statistics --- 00:17:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.302 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:17:49.302 13:28:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:49.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:49.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:17:49.302 00:17:49.302 --- 10.0.0.3 ping statistics --- 00:17:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.302 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:49.302 13:28:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:49.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:49.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:17:49.302 00:17:49.302 --- 10.0.0.1 ping statistics --- 00:17:49.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.302 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:17:49.302 13:28:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.302 13:28:05 -- nvmf/common.sh@422 -- # return 0 00:17:49.302 13:28:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:49.302 13:28:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.302 13:28:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:49.302 13:28:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:49.302 13:28:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.302 13:28:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:49.302 13:28:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:49.302 13:28:05 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:17:49.302 13:28:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:49.302 13:28:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:49.302 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 ************************************ 00:17:49.302 START TEST nvmf_filesystem_no_in_capsule 00:17:49.302 ************************************ 00:17:49.302 13:28:05 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:17:49.302 13:28:05 -- target/filesystem.sh@47 -- # in_capsule=0 00:17:49.302 13:28:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:49.302 13:28:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:49.302 13:28:05 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:17:49.302 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.302 13:28:05 -- nvmf/common.sh@470 -- # nvmfpid=65345 00:17:49.302 13:28:05 -- nvmf/common.sh@471 -- # waitforlisten 65345 00:17:49.302 13:28:05 -- common/autotest_common.sh@817 -- # '[' -z 65345 ']' 00:17:49.302 13:28:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.302 13:28:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:49.302 13:28:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:49.302 13:28:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.302 13:28:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:49.302 13:28:05 -- common/autotest_common.sh@10 -- # set +x 00:17:49.302 [2024-04-26 13:28:05.834548] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:17:49.302 [2024-04-26 13:28:05.834630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.302 [2024-04-26 13:28:05.970745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.302 [2024-04-26 13:28:06.088265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.302 [2024-04-26 13:28:06.088330] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.302 [2024-04-26 13:28:06.088356] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.302 [2024-04-26 13:28:06.088365] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.302 [2024-04-26 13:28:06.088372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
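By this point the harness has built the virtual test network (a target network namespace, veth pairs for the initiator and target sides, all tied together by the nvmf_br bridge) and has launched nvmf_tgt inside that namespace, waiting for its RPC socket. A condensed bash sketch of those steps, using the interface names, addresses and binary path from the trace; the second target interface is omitted, and the wait is simplified to polling for the socket file rather than the harness's waitforlisten RPC check.

```bash
#!/usr/bin/env bash
set -e

NS=nvmf_tgt_ns_spdk
SOCK=/var/tmp/spdk.sock

# Initiator veth end stays in the root namespace; the target end moves into its own netns.
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bridge the root-namespace ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic through (port 4420) and allow bridge-internal forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Start the target inside the namespace and wait until its RPC socket appears.
ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S "$SOCK" ]; do sleep 0.1; done
```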
00:17:49.302 [2024-04-26 13:28:06.089312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.302 [2024-04-26 13:28:06.089464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.302 [2024-04-26 13:28:06.090421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.302 [2024-04-26 13:28:06.090471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.561 13:28:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.561 13:28:06 -- common/autotest_common.sh@850 -- # return 0 00:17:49.561 13:28:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:49.561 13:28:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:49.561 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.561 13:28:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.561 13:28:06 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:49.561 13:28:06 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:49.561 13:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.561 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.561 [2024-04-26 13:28:06.907314] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.561 13:28:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.561 13:28:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:49.561 13:28:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.561 13:28:06 -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 Malloc1 00:17:49.820 13:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.820 13:28:07 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:49.820 13:28:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.820 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 13:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.820 13:28:07 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:49.820 13:28:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.820 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 13:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.820 13:28:07 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.820 13:28:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.820 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 [2024-04-26 13:28:07.099564] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.820 13:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.820 13:28:07 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:49.820 13:28:07 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:49.820 13:28:07 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:49.820 13:28:07 -- common/autotest_common.sh@1366 -- # local bs 00:17:49.820 13:28:07 -- common/autotest_common.sh@1367 -- # local nb 00:17:49.820 13:28:07 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:49.820 13:28:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:49.820 13:28:07 -- common/autotest_common.sh@10 -- # set +x 00:17:49.820 
13:28:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.820 13:28:07 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:49.820 { 00:17:49.820 "aliases": [ 00:17:49.820 "8ab0e2f0-6522-4b83-bedf-55ec9f4a5ff5" 00:17:49.820 ], 00:17:49.820 "assigned_rate_limits": { 00:17:49.820 "r_mbytes_per_sec": 0, 00:17:49.820 "rw_ios_per_sec": 0, 00:17:49.820 "rw_mbytes_per_sec": 0, 00:17:49.820 "w_mbytes_per_sec": 0 00:17:49.820 }, 00:17:49.820 "block_size": 512, 00:17:49.820 "claim_type": "exclusive_write", 00:17:49.820 "claimed": true, 00:17:49.820 "driver_specific": {}, 00:17:49.820 "memory_domains": [ 00:17:49.820 { 00:17:49.820 "dma_device_id": "system", 00:17:49.820 "dma_device_type": 1 00:17:49.820 }, 00:17:49.820 { 00:17:49.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.820 "dma_device_type": 2 00:17:49.820 } 00:17:49.820 ], 00:17:49.820 "name": "Malloc1", 00:17:49.820 "num_blocks": 1048576, 00:17:49.820 "product_name": "Malloc disk", 00:17:49.820 "supported_io_types": { 00:17:49.820 "abort": true, 00:17:49.820 "compare": false, 00:17:49.820 "compare_and_write": false, 00:17:49.820 "flush": true, 00:17:49.820 "nvme_admin": false, 00:17:49.820 "nvme_io": false, 00:17:49.820 "read": true, 00:17:49.820 "reset": true, 00:17:49.820 "unmap": true, 00:17:49.820 "write": true, 00:17:49.820 "write_zeroes": true 00:17:49.820 }, 00:17:49.820 "uuid": "8ab0e2f0-6522-4b83-bedf-55ec9f4a5ff5", 00:17:49.820 "zoned": false 00:17:49.820 } 00:17:49.820 ]' 00:17:49.820 13:28:07 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:49.820 13:28:07 -- common/autotest_common.sh@1369 -- # bs=512 00:17:49.820 13:28:07 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:49.820 13:28:07 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:49.820 13:28:07 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:49.820 13:28:07 -- common/autotest_common.sh@1374 -- # echo 512 00:17:49.820 13:28:07 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:49.820 13:28:07 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:50.078 13:28:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:50.078 13:28:07 -- common/autotest_common.sh@1184 -- # local i=0 00:17:50.078 13:28:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:50.078 13:28:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:50.078 13:28:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:17:51.976 13:28:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:17:51.976 13:28:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:17:51.976 13:28:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:17:51.976 13:28:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:17:51.976 13:28:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:17:51.976 13:28:09 -- common/autotest_common.sh@1194 -- # return 0 00:17:52.233 13:28:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:17:52.233 13:28:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:17:52.233 13:28:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:17:52.233 13:28:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:17:52.233 13:28:09 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:17:52.233 13:28:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:52.233 13:28:09 -- setup/common.sh@80 -- # echo 536870912 00:17:52.233 13:28:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:17:52.233 13:28:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:17:52.234 13:28:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:17:52.234 13:28:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:17:52.234 13:28:09 -- target/filesystem.sh@69 -- # partprobe 00:17:52.234 13:28:09 -- target/filesystem.sh@70 -- # sleep 1 00:17:53.167 13:28:10 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:17:53.167 13:28:10 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:17:53.167 13:28:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:53.167 13:28:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.167 13:28:10 -- common/autotest_common.sh@10 -- # set +x 00:17:53.470 ************************************ 00:17:53.470 START TEST filesystem_ext4 00:17:53.470 ************************************ 00:17:53.470 13:28:10 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:17:53.470 13:28:10 -- target/filesystem.sh@18 -- # fstype=ext4 00:17:53.470 13:28:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:53.470 13:28:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:17:53.470 13:28:10 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:17:53.470 13:28:10 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:53.470 13:28:10 -- common/autotest_common.sh@914 -- # local i=0 00:17:53.470 13:28:10 -- common/autotest_common.sh@915 -- # local force 00:17:53.470 13:28:10 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:17:53.470 13:28:10 -- common/autotest_common.sh@918 -- # force=-F 00:17:53.470 13:28:10 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:17:53.470 mke2fs 1.46.5 (30-Dec-2021) 00:17:53.470 Discarding device blocks: 0/522240 done 00:17:53.470 Creating filesystem with 522240 1k blocks and 130560 inodes 00:17:53.470 Filesystem UUID: a66acb91-c599-40d3-b227-b3d774da7878 00:17:53.470 Superblock backups stored on blocks: 00:17:53.470 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:17:53.470 00:17:53.470 Allocating group tables: 0/64 done 00:17:53.470 Writing inode tables: 0/64 done 00:17:53.470 Creating journal (8192 blocks): done 00:17:53.470 Writing superblocks and filesystem accounting information: 0/64 done 00:17:53.470 00:17:53.470 13:28:10 -- common/autotest_common.sh@931 -- # return 0 00:17:53.470 13:28:10 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:53.729 13:28:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:53.729 13:28:10 -- target/filesystem.sh@25 -- # sync 00:17:53.729 13:28:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:53.729 13:28:11 -- target/filesystem.sh@27 -- # sync 00:17:53.729 13:28:11 -- target/filesystem.sh@29 -- # i=0 00:17:53.729 13:28:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:53.729 13:28:11 -- target/filesystem.sh@37 -- # kill -0 65345 00:17:53.729 13:28:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:53.729 13:28:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:53.729 13:28:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:53.729 13:28:11 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:17:53.729 00:17:53.729 real 0m0.400s 00:17:53.729 user 0m0.026s 00:17:53.729 sys 0m0.050s 00:17:53.729 13:28:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.729 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:17:53.729 ************************************ 00:17:53.729 END TEST filesystem_ext4 00:17:53.729 ************************************ 00:17:53.729 13:28:11 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:17:53.729 13:28:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:53.729 13:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.729 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:17:53.988 ************************************ 00:17:53.988 START TEST filesystem_btrfs 00:17:53.988 ************************************ 00:17:53.988 13:28:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:17:53.988 13:28:11 -- target/filesystem.sh@18 -- # fstype=btrfs 00:17:53.988 13:28:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:53.988 13:28:11 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:17:53.988 13:28:11 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:17:53.988 13:28:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:53.988 13:28:11 -- common/autotest_common.sh@914 -- # local i=0 00:17:53.988 13:28:11 -- common/autotest_common.sh@915 -- # local force 00:17:53.988 13:28:11 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:17:53.988 13:28:11 -- common/autotest_common.sh@920 -- # force=-f 00:17:53.988 13:28:11 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:17:53.988 btrfs-progs v6.6.2 00:17:53.988 See https://btrfs.readthedocs.io for more information. 00:17:53.988 00:17:53.988 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:17:53.988 NOTE: several default settings have changed in version 5.15, please make sure 00:17:53.988 this does not affect your deployments: 00:17:53.988 - DUP for metadata (-m dup) 00:17:53.988 - enabled no-holes (-O no-holes) 00:17:53.988 - enabled free-space-tree (-R free-space-tree) 00:17:53.988 00:17:53.988 Label: (null) 00:17:53.988 UUID: 9fed5d88-c6a8-4cb6-9cdc-7261f8cf3a05 00:17:53.988 Node size: 16384 00:17:53.988 Sector size: 4096 00:17:53.988 Filesystem size: 510.00MiB 00:17:53.988 Block group profiles: 00:17:53.988 Data: single 8.00MiB 00:17:53.988 Metadata: DUP 32.00MiB 00:17:53.988 System: DUP 8.00MiB 00:17:53.988 SSD detected: yes 00:17:53.988 Zoned device: no 00:17:53.988 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:17:53.988 Runtime features: free-space-tree 00:17:53.988 Checksum: crc32c 00:17:53.988 Number of devices: 1 00:17:53.988 Devices: 00:17:53.988 ID SIZE PATH 00:17:53.988 1 510.00MiB /dev/nvme0n1p1 00:17:53.988 00:17:53.988 13:28:11 -- common/autotest_common.sh@931 -- # return 0 00:17:53.988 13:28:11 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:54.247 13:28:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:54.247 13:28:11 -- target/filesystem.sh@25 -- # sync 00:17:54.247 13:28:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:54.247 13:28:11 -- target/filesystem.sh@27 -- # sync 00:17:54.247 13:28:11 -- target/filesystem.sh@29 -- # i=0 00:17:54.247 13:28:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:54.247 13:28:11 -- target/filesystem.sh@37 -- # kill -0 65345 00:17:54.247 13:28:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:54.247 13:28:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:54.247 13:28:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:54.248 13:28:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:54.248 00:17:54.248 real 0m0.363s 00:17:54.248 user 0m0.025s 00:17:54.248 sys 0m0.073s 00:17:54.248 13:28:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:54.248 ************************************ 00:17:54.248 END TEST filesystem_btrfs 00:17:54.248 ************************************ 00:17:54.248 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 13:28:11 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:17:54.248 13:28:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:54.248 13:28:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:54.248 13:28:11 -- common/autotest_common.sh@10 -- # set +x 00:17:54.248 ************************************ 00:17:54.248 START TEST filesystem_xfs 00:17:54.248 ************************************ 00:17:54.248 13:28:11 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:17:54.248 13:28:11 -- target/filesystem.sh@18 -- # fstype=xfs 00:17:54.248 13:28:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:17:54.248 13:28:11 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:17:54.248 13:28:11 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:17:54.248 13:28:11 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:17:54.248 13:28:11 -- common/autotest_common.sh@914 -- # local i=0 00:17:54.248 13:28:11 -- common/autotest_common.sh@915 -- # local force 00:17:54.248 13:28:11 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:17:54.248 13:28:11 -- common/autotest_common.sh@920 -- # force=-f 00:17:54.248 13:28:11 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:17:54.506 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:17:54.506 = sectsz=512 attr=2, projid32bit=1 00:17:54.506 = crc=1 finobt=1, sparse=1, rmapbt=0 00:17:54.506 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:17:54.506 data = bsize=4096 blocks=130560, imaxpct=25 00:17:54.506 = sunit=0 swidth=0 blks 00:17:54.506 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:17:54.506 log =internal log bsize=4096 blocks=16384, version=2 00:17:54.506 = sectsz=512 sunit=0 blks, lazy-count=1 00:17:54.506 realtime =none extsz=4096 blocks=0, rtextents=0 00:17:55.072 Discarding blocks...Done. 00:17:55.072 13:28:12 -- common/autotest_common.sh@931 -- # return 0 00:17:55.072 13:28:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:17:57.606 13:28:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:17:57.606 13:28:14 -- target/filesystem.sh@25 -- # sync 00:17:57.606 13:28:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:17:57.606 13:28:14 -- target/filesystem.sh@27 -- # sync 00:17:57.606 13:28:14 -- target/filesystem.sh@29 -- # i=0 00:17:57.606 13:28:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:17:57.606 13:28:14 -- target/filesystem.sh@37 -- # kill -0 65345 00:17:57.606 13:28:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:17:57.606 13:28:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:17:57.606 13:28:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:17:57.606 13:28:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:17:57.606 ************************************ 00:17:57.606 END TEST filesystem_xfs 00:17:57.606 ************************************ 00:17:57.606 00:17:57.606 real 0m3.183s 00:17:57.606 user 0m0.017s 00:17:57.606 sys 0m0.064s 00:17:57.606 13:28:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:57.606 13:28:14 -- common/autotest_common.sh@10 -- # set +x 00:17:57.606 13:28:14 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:17:57.606 13:28:14 -- target/filesystem.sh@93 -- # sync 00:17:57.606 13:28:14 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.606 13:28:15 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:57.606 13:28:15 -- common/autotest_common.sh@1205 -- # local i=0 00:17:57.606 13:28:15 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:57.606 13:28:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.606 13:28:15 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:57.606 13:28:15 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.606 13:28:15 -- common/autotest_common.sh@1217 -- # return 0 00:17:57.606 13:28:15 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.606 13:28:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:57.606 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:57.606 13:28:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:57.606 13:28:15 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:57.606 13:28:15 -- target/filesystem.sh@101 -- # killprocess 65345 00:17:57.606 13:28:15 -- common/autotest_common.sh@936 -- # '[' -z 65345 ']' 00:17:57.606 13:28:15 -- common/autotest_common.sh@940 -- # kill -0 65345 00:17:57.606 13:28:15 -- 
common/autotest_common.sh@941 -- # uname 00:17:57.606 13:28:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:57.606 13:28:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65345 00:17:57.864 killing process with pid 65345 00:17:57.864 13:28:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:57.864 13:28:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:57.864 13:28:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65345' 00:17:57.864 13:28:15 -- common/autotest_common.sh@955 -- # kill 65345 00:17:57.864 13:28:15 -- common/autotest_common.sh@960 -- # wait 65345 00:17:58.123 13:28:15 -- target/filesystem.sh@102 -- # nvmfpid= 00:17:58.123 00:17:58.123 real 0m9.752s 00:17:58.123 user 0m36.877s 00:17:58.123 sys 0m1.783s 00:17:58.123 13:28:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:58.123 ************************************ 00:17:58.123 END TEST nvmf_filesystem_no_in_capsule 00:17:58.123 ************************************ 00:17:58.123 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:58.123 13:28:15 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:17:58.123 13:28:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:58.123 13:28:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.123 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:58.382 ************************************ 00:17:58.382 START TEST nvmf_filesystem_in_capsule 00:17:58.382 ************************************ 00:17:58.382 13:28:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:17:58.382 13:28:15 -- target/filesystem.sh@47 -- # in_capsule=4096 00:17:58.382 13:28:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:17:58.382 13:28:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:58.382 13:28:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.382 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:58.382 13:28:15 -- nvmf/common.sh@470 -- # nvmfpid=65682 00:17:58.382 13:28:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.382 13:28:15 -- nvmf/common.sh@471 -- # waitforlisten 65682 00:17:58.382 13:28:15 -- common/autotest_common.sh@817 -- # '[' -z 65682 ']' 00:17:58.382 13:28:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.382 13:28:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.382 13:28:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.382 13:28:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.382 13:28:15 -- common/autotest_common.sh@10 -- # set +x 00:17:58.382 [2024-04-26 13:28:15.708251] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
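The no-in-capsule suite has just finished: for each of ext4, btrfs and xfs it formatted the exported namespace's partition, mounted it, wrote and removed a file with syncs in between, unmounted, and confirmed with kill -0 and lsblk that the target process and the block device were still healthy, then dropped the partition, disconnected the host and killed the target. A compact sketch of that per-filesystem loop and teardown, assuming the device name nvme0n1 that lsblk reported in the trace; the mount point, partition commands and subsystem NQN follow the trace.

```bash
#!/usr/bin/env bash
set -e

dev=/dev/nvme0n1
part=${dev}p1
mnt=/mnt/device
tgt_pid=$1          # PID of the nvmf_tgt started earlier

mkdir -p "$mnt"
parted -s "$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1

for fstype in ext4 btrfs xfs; do
    case $fstype in
        ext4) mkfs.ext4 -F "$part" ;;           # ext4 forces with -F
        *)    "mkfs.$fstype" -f "$part" ;;      # btrfs/xfs force with -f
    esac
    mount "$part" "$mnt"
    touch "$mnt/aaa" && sync
    rm "$mnt/aaa" && sync
    umount "$mnt"
    kill -0 "$tgt_pid"                                          # target still alive
    lsblk -l -o NAME | grep -q -w "$(basename "$part")"        # partition still visible
done

# Teardown: drop the partition, disconnect the host, stop the target.
flock "$dev" parted -s "$dev" rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
kill "$tgt_pid"
```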
00:17:58.382 [2024-04-26 13:28:15.708358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.640 [2024-04-26 13:28:15.845093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.640 [2024-04-26 13:28:15.962118] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.640 [2024-04-26 13:28:15.962205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.640 [2024-04-26 13:28:15.962233] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.640 [2024-04-26 13:28:15.962242] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.640 [2024-04-26 13:28:15.962250] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.640 [2024-04-26 13:28:15.962409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.640 [2024-04-26 13:28:15.962498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.640 [2024-04-26 13:28:15.963219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.640 [2024-04-26 13:28:15.963270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.576 13:28:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.576 13:28:16 -- common/autotest_common.sh@850 -- # return 0 00:17:59.576 13:28:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:59.576 13:28:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 13:28:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.576 13:28:16 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:17:59.576 13:28:16 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 [2024-04-26 13:28:16.696868] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 [2024-04-26 13:28:16.882620] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:59.576 13:28:16 -- common/autotest_common.sh@1366 -- # local bs 00:17:59.576 13:28:16 -- common/autotest_common.sh@1367 -- # local nb 00:17:59.576 13:28:16 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:17:59.576 13:28:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:59.576 13:28:16 -- common/autotest_common.sh@10 -- # set +x 00:17:59.576 13:28:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:59.576 13:28:16 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:59.576 { 00:17:59.576 "aliases": [ 00:17:59.576 "09617e07-dd38-4950-a945-59b19e24a008" 00:17:59.576 ], 00:17:59.576 "assigned_rate_limits": { 00:17:59.576 "r_mbytes_per_sec": 0, 00:17:59.576 "rw_ios_per_sec": 0, 00:17:59.576 "rw_mbytes_per_sec": 0, 00:17:59.576 "w_mbytes_per_sec": 0 00:17:59.576 }, 00:17:59.576 "block_size": 512, 00:17:59.576 "claim_type": "exclusive_write", 00:17:59.576 "claimed": true, 00:17:59.576 "driver_specific": {}, 00:17:59.576 "memory_domains": [ 00:17:59.576 { 00:17:59.576 "dma_device_id": "system", 00:17:59.576 "dma_device_type": 1 00:17:59.576 }, 00:17:59.576 { 00:17:59.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.576 "dma_device_type": 2 00:17:59.576 } 00:17:59.576 ], 00:17:59.576 "name": "Malloc1", 00:17:59.576 "num_blocks": 1048576, 00:17:59.576 "product_name": "Malloc disk", 00:17:59.576 "supported_io_types": { 00:17:59.576 "abort": true, 00:17:59.576 "compare": false, 00:17:59.576 "compare_and_write": false, 00:17:59.576 "flush": true, 00:17:59.576 "nvme_admin": false, 00:17:59.576 "nvme_io": false, 00:17:59.576 "read": true, 00:17:59.576 "reset": true, 00:17:59.576 "unmap": true, 00:17:59.576 "write": true, 00:17:59.576 "write_zeroes": true 00:17:59.576 }, 00:17:59.576 "uuid": "09617e07-dd38-4950-a945-59b19e24a008", 00:17:59.576 "zoned": false 00:17:59.576 } 00:17:59.576 ]' 00:17:59.576 13:28:16 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:59.576 13:28:16 -- common/autotest_common.sh@1369 -- # bs=512 00:17:59.576 13:28:16 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:59.576 13:28:17 -- common/autotest_common.sh@1370 -- # nb=1048576 00:17:59.576 13:28:17 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:17:59.576 13:28:17 -- common/autotest_common.sh@1374 -- # echo 512 00:17:59.576 13:28:17 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:17:59.577 13:28:17 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.836 13:28:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.836 13:28:17 -- common/autotest_common.sh@1184 -- # local i=0 00:17:59.836 13:28:17 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:17:59.836 13:28:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:59.836 13:28:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:01.740 13:28:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:01.740 13:28:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:01.740 13:28:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.999 13:28:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:01.999 13:28:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.999 13:28:19 -- common/autotest_common.sh@1194 -- # return 0 00:18:01.999 13:28:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:18:01.999 13:28:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:18:01.999 13:28:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:18:01.999 13:28:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:18:01.999 13:28:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:18:01.999 13:28:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:01.999 13:28:19 -- setup/common.sh@80 -- # echo 536870912 00:18:01.999 13:28:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:18:01.999 13:28:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:18:01.999 13:28:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:18:01.999 13:28:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:18:01.999 13:28:19 -- target/filesystem.sh@69 -- # partprobe 00:18:01.999 13:28:19 -- target/filesystem.sh@70 -- # sleep 1 00:18:02.934 13:28:20 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:18:02.934 13:28:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:18:02.934 13:28:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:02.935 13:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.935 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.193 ************************************ 00:18:03.193 START TEST filesystem_in_capsule_ext4 00:18:03.193 ************************************ 00:18:03.193 13:28:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:18:03.193 13:28:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:18:03.193 13:28:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:03.193 13:28:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:18:03.193 13:28:20 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:18:03.193 13:28:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:03.193 13:28:20 -- common/autotest_common.sh@914 -- # local i=0 00:18:03.193 13:28:20 -- common/autotest_common.sh@915 -- # local force 00:18:03.193 13:28:20 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:18:03.193 13:28:20 -- common/autotest_common.sh@918 -- # force=-F 00:18:03.193 13:28:20 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:18:03.193 mke2fs 1.46.5 (30-Dec-2021) 00:18:03.193 Discarding device blocks: 0/522240 done 00:18:03.193 Creating filesystem with 522240 1k blocks and 130560 inodes 00:18:03.193 Filesystem UUID: 4ec02483-99be-4a16-8baf-0a0d66d5510e 00:18:03.193 Superblock backups stored on blocks: 00:18:03.193 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:18:03.193 00:18:03.193 Allocating group tables: 0/64 done 
00:18:03.193 Writing inode tables: 0/64 done 00:18:03.193 Creating journal (8192 blocks): done 00:18:03.193 Writing superblocks and filesystem accounting information: 0/64 done 00:18:03.193 00:18:03.193 13:28:20 -- common/autotest_common.sh@931 -- # return 0 00:18:03.193 13:28:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:03.452 13:28:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:03.452 13:28:20 -- target/filesystem.sh@25 -- # sync 00:18:03.452 13:28:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:03.452 13:28:20 -- target/filesystem.sh@27 -- # sync 00:18:03.452 13:28:20 -- target/filesystem.sh@29 -- # i=0 00:18:03.452 13:28:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:03.452 13:28:20 -- target/filesystem.sh@37 -- # kill -0 65682 00:18:03.452 13:28:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:03.452 13:28:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:03.452 13:28:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:03.452 13:28:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:03.452 00:18:03.452 real 0m0.388s 00:18:03.452 user 0m0.022s 00:18:03.452 sys 0m0.045s 00:18:03.452 13:28:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.452 ************************************ 00:18:03.452 END TEST filesystem_in_capsule_ext4 00:18:03.452 ************************************ 00:18:03.452 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.452 13:28:20 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:18:03.452 13:28:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:03.452 13:28:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.452 13:28:20 -- common/autotest_common.sh@10 -- # set +x 00:18:03.452 ************************************ 00:18:03.452 START TEST filesystem_in_capsule_btrfs 00:18:03.452 ************************************ 00:18:03.452 13:28:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:18:03.452 13:28:20 -- target/filesystem.sh@18 -- # fstype=btrfs 00:18:03.452 13:28:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:03.452 13:28:20 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:18:03.452 13:28:20 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:18:03.452 13:28:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:03.452 13:28:20 -- common/autotest_common.sh@914 -- # local i=0 00:18:03.452 13:28:20 -- common/autotest_common.sh@915 -- # local force 00:18:03.452 13:28:20 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:18:03.452 13:28:20 -- common/autotest_common.sh@920 -- # force=-f 00:18:03.452 13:28:20 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:18:03.711 btrfs-progs v6.6.2 00:18:03.711 See https://btrfs.readthedocs.io for more information. 00:18:03.711 00:18:03.711 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:18:03.711 NOTE: several default settings have changed in version 5.15, please make sure 00:18:03.711 this does not affect your deployments: 00:18:03.711 - DUP for metadata (-m dup) 00:18:03.711 - enabled no-holes (-O no-holes) 00:18:03.711 - enabled free-space-tree (-R free-space-tree) 00:18:03.711 00:18:03.711 Label: (null) 00:18:03.711 UUID: 0244b957-6b37-4beb-ae40-9861fefe3918 00:18:03.711 Node size: 16384 00:18:03.711 Sector size: 4096 00:18:03.711 Filesystem size: 510.00MiB 00:18:03.711 Block group profiles: 00:18:03.711 Data: single 8.00MiB 00:18:03.711 Metadata: DUP 32.00MiB 00:18:03.711 System: DUP 8.00MiB 00:18:03.711 SSD detected: yes 00:18:03.711 Zoned device: no 00:18:03.711 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:18:03.711 Runtime features: free-space-tree 00:18:03.711 Checksum: crc32c 00:18:03.711 Number of devices: 1 00:18:03.711 Devices: 00:18:03.711 ID SIZE PATH 00:18:03.711 1 510.00MiB /dev/nvme0n1p1 00:18:03.711 00:18:03.711 13:28:21 -- common/autotest_common.sh@931 -- # return 0 00:18:03.711 13:28:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:03.711 13:28:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:03.711 13:28:21 -- target/filesystem.sh@25 -- # sync 00:18:03.711 13:28:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:03.711 13:28:21 -- target/filesystem.sh@27 -- # sync 00:18:03.711 13:28:21 -- target/filesystem.sh@29 -- # i=0 00:18:03.711 13:28:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:03.711 13:28:21 -- target/filesystem.sh@37 -- # kill -0 65682 00:18:03.711 13:28:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:03.711 13:28:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:03.711 13:28:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:03.711 13:28:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:03.711 00:18:03.712 real 0m0.223s 00:18:03.712 user 0m0.023s 00:18:03.712 sys 0m0.060s 00:18:03.712 13:28:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.712 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:18:03.712 ************************************ 00:18:03.712 END TEST filesystem_in_capsule_btrfs 00:18:03.712 ************************************ 00:18:03.712 13:28:21 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:18:03.712 13:28:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:03.712 13:28:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.712 13:28:21 -- common/autotest_common.sh@10 -- # set +x 00:18:03.970 ************************************ 00:18:03.970 START TEST filesystem_in_capsule_xfs 00:18:03.970 ************************************ 00:18:03.970 13:28:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:18:03.970 13:28:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:18:03.970 13:28:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:18:03.970 13:28:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:18:03.970 13:28:21 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:18:03.970 13:28:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:18:03.970 13:28:21 -- common/autotest_common.sh@914 -- # local i=0 00:18:03.970 13:28:21 -- common/autotest_common.sh@915 -- # local force 00:18:03.970 13:28:21 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:18:03.970 13:28:21 -- common/autotest_common.sh@920 -- # force=-f 
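The make_filesystem traces for ext4 and btrfs above, and for xfs immediately below, all branch on the filesystem type before formatting: only mkfs.ext4 takes the capital -F force flag, while mkfs.btrfs and mkfs.xfs take -f. A minimal sketch of that selection, reconstructed from the trace rather than copied from common.sh (the helper in common.sh also carries the retry counter hinted at by the "local i=0" lines, omitted here):

  # sketch: choose the force flag the way the trace does, then format the partition
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F        # mkfs.ext4 wants a capital -F to force formatting
      else
          force=-f        # mkfs.btrfs and mkfs.xfs use lowercase -f
      fi
      "mkfs.$fstype" "$force" "$dev_name"
  }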
00:18:03.970 13:28:21 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:18:03.970 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:18:03.970 = sectsz=512 attr=2, projid32bit=1 00:18:03.970 = crc=1 finobt=1, sparse=1, rmapbt=0 00:18:03.971 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:18:03.971 data = bsize=4096 blocks=130560, imaxpct=25 00:18:03.971 = sunit=0 swidth=0 blks 00:18:03.971 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:18:03.971 log =internal log bsize=4096 blocks=16384, version=2 00:18:03.971 = sectsz=512 sunit=0 blks, lazy-count=1 00:18:03.971 realtime =none extsz=4096 blocks=0, rtextents=0 00:18:04.905 Discarding blocks...Done. 00:18:04.905 13:28:21 -- common/autotest_common.sh@931 -- # return 0 00:18:04.905 13:28:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:18:06.376 13:28:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:18:06.376 13:28:23 -- target/filesystem.sh@25 -- # sync 00:18:06.376 13:28:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:18:06.376 13:28:23 -- target/filesystem.sh@27 -- # sync 00:18:06.376 13:28:23 -- target/filesystem.sh@29 -- # i=0 00:18:06.376 13:28:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:18:06.376 13:28:23 -- target/filesystem.sh@37 -- # kill -0 65682 00:18:06.376 13:28:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:18:06.376 13:28:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:18:06.634 13:28:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:18:06.635 13:28:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:18:06.635 00:18:06.635 real 0m2.617s 00:18:06.635 user 0m0.024s 00:18:06.635 sys 0m0.050s 00:18:06.635 13:28:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:06.635 13:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:06.635 ************************************ 00:18:06.635 END TEST filesystem_in_capsule_xfs 00:18:06.635 ************************************ 00:18:06.635 13:28:23 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:18:06.635 13:28:23 -- target/filesystem.sh@93 -- # sync 00:18:06.635 13:28:23 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.635 13:28:23 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.635 13:28:23 -- common/autotest_common.sh@1205 -- # local i=0 00:18:06.635 13:28:23 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:06.635 13:28:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.635 13:28:23 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:06.635 13:28:23 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.635 13:28:23 -- common/autotest_common.sh@1217 -- # return 0 00:18:06.635 13:28:23 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.635 13:28:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:06.635 13:28:23 -- common/autotest_common.sh@10 -- # set +x 00:18:06.635 13:28:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:06.635 13:28:23 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:06.635 13:28:23 -- target/filesystem.sh@101 -- # killprocess 65682 00:18:06.635 13:28:23 -- common/autotest_common.sh@936 -- # '[' -z 65682 ']' 00:18:06.635 13:28:23 -- common/autotest_common.sh@940 -- # kill -0 65682 
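Each of the three in-capsule runs above exercises the exported namespace the same way between its mkfs step and its END TEST marker. Condensed from the target/filesystem.sh lines in the trace (device path, mount point and pid 65682 are the values printed in the log; error handling is trimmed), the per-filesystem check is roughly:

  # sketch: per-filesystem smoke test over the NVMe/TCP-attached partition
  mount /dev/nvme0n1p1 /mnt/device          # mount the freshly formatted partition
  touch /mnt/device/aaa                     # create a file ...
  sync
  rm /mnt/device/aaa                        # ... and remove it again
  sync
  umount /mnt/device                        # unmount before the next fs type runs
  kill -0 65682                             # nvmf target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # controller and partition still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1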
00:18:06.635 13:28:23 -- common/autotest_common.sh@941 -- # uname 00:18:06.635 13:28:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.635 13:28:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65682 00:18:06.635 13:28:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:06.635 killing process with pid 65682 00:18:06.635 13:28:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:06.635 13:28:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65682' 00:18:06.635 13:28:24 -- common/autotest_common.sh@955 -- # kill 65682 00:18:06.635 13:28:24 -- common/autotest_common.sh@960 -- # wait 65682 00:18:07.201 13:28:24 -- target/filesystem.sh@102 -- # nvmfpid= 00:18:07.201 00:18:07.201 real 0m8.802s 00:18:07.201 user 0m33.116s 00:18:07.201 sys 0m1.668s 00:18:07.201 13:28:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.201 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:18:07.201 ************************************ 00:18:07.201 END TEST nvmf_filesystem_in_capsule 00:18:07.201 ************************************ 00:18:07.201 13:28:24 -- target/filesystem.sh@108 -- # nvmftestfini 00:18:07.201 13:28:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:07.201 13:28:24 -- nvmf/common.sh@117 -- # sync 00:18:07.201 13:28:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.201 13:28:24 -- nvmf/common.sh@120 -- # set +e 00:18:07.201 13:28:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.201 13:28:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.201 rmmod nvme_tcp 00:18:07.201 rmmod nvme_fabrics 00:18:07.201 rmmod nvme_keyring 00:18:07.201 13:28:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.201 13:28:24 -- nvmf/common.sh@124 -- # set -e 00:18:07.201 13:28:24 -- nvmf/common.sh@125 -- # return 0 00:18:07.201 13:28:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:18:07.201 13:28:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:07.201 13:28:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:07.201 13:28:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:07.201 13:28:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.201 13:28:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.201 13:28:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.201 13:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.201 13:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.201 13:28:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:07.201 00:18:07.201 real 0m19.480s 00:18:07.201 user 1m10.271s 00:18:07.201 sys 0m3.873s 00:18:07.201 13:28:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:07.201 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:18:07.201 ************************************ 00:18:07.201 END TEST nvmf_filesystem 00:18:07.201 ************************************ 00:18:07.459 13:28:24 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:18:07.459 13:28:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.459 13:28:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.459 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:18:07.459 ************************************ 00:18:07.459 START TEST nvmf_discovery 00:18:07.459 ************************************ 00:18:07.459 13:28:24 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:18:07.459 * Looking for test storage... 00:18:07.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.459 13:28:24 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.459 13:28:24 -- nvmf/common.sh@7 -- # uname -s 00:18:07.459 13:28:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.459 13:28:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.459 13:28:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.459 13:28:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.459 13:28:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.459 13:28:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.459 13:28:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.459 13:28:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.459 13:28:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.459 13:28:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.459 13:28:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:07.459 13:28:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:07.459 13:28:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.459 13:28:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.459 13:28:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.459 13:28:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.459 13:28:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.460 13:28:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.460 13:28:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.460 13:28:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.460 13:28:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.460 13:28:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.460 13:28:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.460 13:28:24 -- paths/export.sh@5 -- # export PATH 00:18:07.460 13:28:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.460 13:28:24 -- nvmf/common.sh@47 -- # : 0 00:18:07.460 13:28:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:07.460 13:28:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:07.460 13:28:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.460 13:28:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.460 13:28:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.460 13:28:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:07.460 13:28:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:07.460 13:28:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:07.460 13:28:24 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:18:07.460 13:28:24 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:18:07.460 13:28:24 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:18:07.460 13:28:24 -- target/discovery.sh@15 -- # hash nvme 00:18:07.460 13:28:24 -- target/discovery.sh@20 -- # nvmftestinit 00:18:07.460 13:28:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:07.460 13:28:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.460 13:28:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:07.460 13:28:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:07.460 13:28:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:07.460 13:28:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.460 13:28:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.460 13:28:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.460 13:28:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:07.460 13:28:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:07.460 13:28:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:07.460 13:28:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:07.460 13:28:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:07.460 13:28:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:07.460 13:28:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.460 13:28:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.460 13:28:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.460 13:28:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:07.460 13:28:24 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.460 13:28:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.460 13:28:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.460 13:28:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.460 13:28:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.460 13:28:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.460 13:28:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.460 13:28:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.460 13:28:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:07.460 13:28:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:07.460 Cannot find device "nvmf_tgt_br" 00:18:07.460 13:28:24 -- nvmf/common.sh@155 -- # true 00:18:07.460 13:28:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.460 Cannot find device "nvmf_tgt_br2" 00:18:07.460 13:28:24 -- nvmf/common.sh@156 -- # true 00:18:07.460 13:28:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:07.460 13:28:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:07.718 Cannot find device "nvmf_tgt_br" 00:18:07.718 13:28:24 -- nvmf/common.sh@158 -- # true 00:18:07.718 13:28:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:07.718 Cannot find device "nvmf_tgt_br2" 00:18:07.718 13:28:24 -- nvmf/common.sh@159 -- # true 00:18:07.718 13:28:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:07.718 13:28:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:07.718 13:28:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.718 13:28:24 -- nvmf/common.sh@162 -- # true 00:18:07.718 13:28:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.718 13:28:24 -- nvmf/common.sh@163 -- # true 00:18:07.718 13:28:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.718 13:28:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.718 13:28:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.718 13:28:25 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.718 13:28:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.718 13:28:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:07.718 13:28:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:07.718 13:28:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:07.718 13:28:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:07.718 13:28:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:07.719 13:28:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:07.719 13:28:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:07.719 13:28:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:07.719 13:28:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:07.719 13:28:25 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:07.719 13:28:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:07.719 13:28:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:07.719 13:28:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:07.719 13:28:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:07.719 13:28:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:07.976 13:28:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:07.976 13:28:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:07.976 13:28:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:07.976 13:28:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:07.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:18:07.977 00:18:07.977 --- 10.0.0.2 ping statistics --- 00:18:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.977 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:18:07.977 13:28:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:07.977 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:07.977 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:07.977 00:18:07.977 --- 10.0.0.3 ping statistics --- 00:18:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.977 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:07.977 13:28:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:07.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:07.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:07.977 00:18:07.977 --- 10.0.0.1 ping statistics --- 00:18:07.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.977 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:07.977 13:28:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.977 13:28:25 -- nvmf/common.sh@422 -- # return 0 00:18:07.977 13:28:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:07.977 13:28:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.977 13:28:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:07.977 13:28:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:07.977 13:28:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.977 13:28:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:07.977 13:28:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:07.977 13:28:25 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:18:07.977 13:28:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:07.977 13:28:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:07.977 13:28:25 -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 13:28:25 -- nvmf/common.sh@470 -- # nvmfpid=66153 00:18:07.977 13:28:25 -- nvmf/common.sh@471 -- # waitforlisten 66153 00:18:07.977 13:28:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:07.977 13:28:25 -- common/autotest_common.sh@817 -- # '[' -z 66153 ']' 00:18:07.977 13:28:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.977 13:28:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:07.977 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.977 13:28:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.977 13:28:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:07.977 13:28:25 -- common/autotest_common.sh@10 -- # set +x 00:18:07.977 [2024-04-26 13:28:25.287826] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:07.977 [2024-04-26 13:28:25.287941] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.234 [2024-04-26 13:28:25.425593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.234 [2024-04-26 13:28:25.542750] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.234 [2024-04-26 13:28:25.542839] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.234 [2024-04-26 13:28:25.542852] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.234 [2024-04-26 13:28:25.542861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.234 [2024-04-26 13:28:25.542869] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.234 [2024-04-26 13:28:25.543049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.235 [2024-04-26 13:28:25.543525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.235 [2024-04-26 13:28:25.544200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.235 [2024-04-26 13:28:25.544209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.171 13:28:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.171 13:28:26 -- common/autotest_common.sh@850 -- # return 0 00:18:09.171 13:28:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:09.171 13:28:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.171 13:28:26 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 [2024-04-26 13:28:26.299422] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@26 -- # seq 1 4 00:18:09.171 13:28:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:09.171 13:28:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 Null1 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 [2024-04-26 13:28:26.360322] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:09.171 13:28:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 Null2 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:09.171 13:28:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 Null3 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:18:09.171 13:28:26 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 Null4 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:18:09.171 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.171 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.171 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.171 13:28:26 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 4420 00:18:09.171 00:18:09.171 Discovery Log Number of Records 6, Generation counter 6 00:18:09.171 =====Discovery Log Entry 0====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 00:18:09.171 subtype: current discovery subsystem 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4420 00:18:09.171 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:09.171 traddr: 10.0.0.2 00:18:09.171 eflags: explicit discovery connections, duplicate discovery information 00:18:09.171 sectype: none 00:18:09.171 =====Discovery Log Entry 1====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 00:18:09.171 subtype: nvme subsystem 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4420 00:18:09.171 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:09.171 traddr: 10.0.0.2 00:18:09.171 eflags: none 00:18:09.171 sectype: none 00:18:09.171 =====Discovery Log Entry 2====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 
00:18:09.171 subtype: nvme subsystem 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4420 00:18:09.171 subnqn: nqn.2016-06.io.spdk:cnode2 00:18:09.171 traddr: 10.0.0.2 00:18:09.171 eflags: none 00:18:09.171 sectype: none 00:18:09.171 =====Discovery Log Entry 3====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 00:18:09.171 subtype: nvme subsystem 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4420 00:18:09.171 subnqn: nqn.2016-06.io.spdk:cnode3 00:18:09.171 traddr: 10.0.0.2 00:18:09.171 eflags: none 00:18:09.171 sectype: none 00:18:09.171 =====Discovery Log Entry 4====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 00:18:09.171 subtype: nvme subsystem 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4420 00:18:09.171 subnqn: nqn.2016-06.io.spdk:cnode4 00:18:09.171 traddr: 10.0.0.2 00:18:09.171 eflags: none 00:18:09.171 sectype: none 00:18:09.171 =====Discovery Log Entry 5====== 00:18:09.171 trtype: tcp 00:18:09.171 adrfam: ipv4 00:18:09.171 subtype: discovery subsystem referral 00:18:09.171 treq: not required 00:18:09.171 portid: 0 00:18:09.171 trsvcid: 4430 00:18:09.171 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:09.172 traddr: 10.0.0.2 00:18:09.172 eflags: none 00:18:09.172 sectype: none 00:18:09.172 Perform nvmf subsystem discovery via RPC 00:18:09.172 13:28:26 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:18:09.172 13:28:26 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:18:09.172 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.172 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.172 [2024-04-26 13:28:26.552483] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:18:09.172 [ 00:18:09.172 { 00:18:09.172 "allow_any_host": true, 00:18:09.172 "hosts": [], 00:18:09.172 "listen_addresses": [ 00:18:09.172 { 00:18:09.172 "adrfam": "IPv4", 00:18:09.172 "traddr": "10.0.0.2", 00:18:09.172 "transport": "TCP", 00:18:09.172 "trsvcid": "4420", 00:18:09.172 "trtype": "TCP" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:09.172 "subtype": "Discovery" 00:18:09.172 }, 00:18:09.172 { 00:18:09.172 "allow_any_host": true, 00:18:09.172 "hosts": [], 00:18:09.172 "listen_addresses": [ 00:18:09.172 { 00:18:09.172 "adrfam": "IPv4", 00:18:09.172 "traddr": "10.0.0.2", 00:18:09.172 "transport": "TCP", 00:18:09.172 "trsvcid": "4420", 00:18:09.172 "trtype": "TCP" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "max_cntlid": 65519, 00:18:09.172 "max_namespaces": 32, 00:18:09.172 "min_cntlid": 1, 00:18:09.172 "model_number": "SPDK bdev Controller", 00:18:09.172 "namespaces": [ 00:18:09.172 { 00:18:09.172 "bdev_name": "Null1", 00:18:09.172 "name": "Null1", 00:18:09.172 "nguid": "FF5232B42A7849C3A07F29550041ADEC", 00:18:09.172 "nsid": 1, 00:18:09.172 "uuid": "ff5232b4-2a78-49c3-a07f-29550041adec" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.172 "serial_number": "SPDK00000000000001", 00:18:09.172 "subtype": "NVMe" 00:18:09.172 }, 00:18:09.172 { 00:18:09.172 "allow_any_host": true, 00:18:09.172 "hosts": [], 00:18:09.172 "listen_addresses": [ 00:18:09.172 { 00:18:09.172 "adrfam": "IPv4", 00:18:09.172 "traddr": "10.0.0.2", 00:18:09.172 "transport": "TCP", 00:18:09.172 "trsvcid": "4420", 00:18:09.172 "trtype": "TCP" 00:18:09.172 
} 00:18:09.172 ], 00:18:09.172 "max_cntlid": 65519, 00:18:09.172 "max_namespaces": 32, 00:18:09.172 "min_cntlid": 1, 00:18:09.172 "model_number": "SPDK bdev Controller", 00:18:09.172 "namespaces": [ 00:18:09.172 { 00:18:09.172 "bdev_name": "Null2", 00:18:09.172 "name": "Null2", 00:18:09.172 "nguid": "2A4DDFBFF8E14797B6F40BCA26349DF6", 00:18:09.172 "nsid": 1, 00:18:09.172 "uuid": "2a4ddfbf-f8e1-4797-b6f4-0bca26349df6" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:09.172 "serial_number": "SPDK00000000000002", 00:18:09.172 "subtype": "NVMe" 00:18:09.172 }, 00:18:09.172 { 00:18:09.172 "allow_any_host": true, 00:18:09.172 "hosts": [], 00:18:09.172 "listen_addresses": [ 00:18:09.172 { 00:18:09.172 "adrfam": "IPv4", 00:18:09.172 "traddr": "10.0.0.2", 00:18:09.172 "transport": "TCP", 00:18:09.172 "trsvcid": "4420", 00:18:09.172 "trtype": "TCP" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "max_cntlid": 65519, 00:18:09.172 "max_namespaces": 32, 00:18:09.172 "min_cntlid": 1, 00:18:09.172 "model_number": "SPDK bdev Controller", 00:18:09.172 "namespaces": [ 00:18:09.172 { 00:18:09.172 "bdev_name": "Null3", 00:18:09.172 "name": "Null3", 00:18:09.172 "nguid": "AA3A12A7C5D04E70A81F014382F4C332", 00:18:09.172 "nsid": 1, 00:18:09.172 "uuid": "aa3a12a7-c5d0-4e70-a81f-014382f4c332" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:18:09.172 "serial_number": "SPDK00000000000003", 00:18:09.172 "subtype": "NVMe" 00:18:09.172 }, 00:18:09.172 { 00:18:09.172 "allow_any_host": true, 00:18:09.172 "hosts": [], 00:18:09.172 "listen_addresses": [ 00:18:09.172 { 00:18:09.172 "adrfam": "IPv4", 00:18:09.172 "traddr": "10.0.0.2", 00:18:09.172 "transport": "TCP", 00:18:09.172 "trsvcid": "4420", 00:18:09.172 "trtype": "TCP" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "max_cntlid": 65519, 00:18:09.172 "max_namespaces": 32, 00:18:09.172 "min_cntlid": 1, 00:18:09.172 "model_number": "SPDK bdev Controller", 00:18:09.172 "namespaces": [ 00:18:09.172 { 00:18:09.172 "bdev_name": "Null4", 00:18:09.172 "name": "Null4", 00:18:09.172 "nguid": "53C60E4361B04C3881EE02C82E171C66", 00:18:09.172 "nsid": 1, 00:18:09.172 "uuid": "53c60e43-61b0-4c38-81ee-02c82e171c66" 00:18:09.172 } 00:18:09.172 ], 00:18:09.172 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:18:09.172 "serial_number": "SPDK00000000000004", 00:18:09.172 "subtype": "NVMe" 00:18:09.172 } 00:18:09.172 ] 00:18:09.172 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.172 13:28:26 -- target/discovery.sh@42 -- # seq 1 4 00:18:09.172 13:28:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:09.172 13:28:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.172 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.172 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.172 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.172 13:28:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:18:09.172 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.172 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.172 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.172 13:28:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:09.172 13:28:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:09.172 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.172 13:28:26 -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.172 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.172 13:28:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:18:09.172 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.172 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:09.431 13:28:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:18:09.431 13:28:26 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:18:09.431 13:28:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:09.431 13:28:26 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:18:09.431 13:28:26 -- common/autotest_common.sh@10 -- # set +x 00:18:09.431 13:28:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:09.431 13:28:26 -- target/discovery.sh@49 -- # check_bdevs= 00:18:09.431 13:28:26 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:18:09.431 13:28:26 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:18:09.431 13:28:26 -- target/discovery.sh@57 -- # nvmftestfini 00:18:09.431 13:28:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:09.431 13:28:26 -- nvmf/common.sh@117 -- # sync 00:18:09.431 13:28:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.431 13:28:26 -- nvmf/common.sh@120 -- # set +e 00:18:09.431 13:28:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.431 13:28:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.431 rmmod nvme_tcp 00:18:09.431 rmmod nvme_fabrics 00:18:09.431 rmmod nvme_keyring 00:18:09.431 13:28:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.431 13:28:26 -- nvmf/common.sh@124 -- # set -e 00:18:09.431 13:28:26 -- nvmf/common.sh@125 -- # return 0 00:18:09.431 13:28:26 -- nvmf/common.sh@478 -- # '[' -n 66153 ']' 00:18:09.431 13:28:26 -- nvmf/common.sh@479 -- # 
killprocess 66153 00:18:09.431 13:28:26 -- common/autotest_common.sh@936 -- # '[' -z 66153 ']' 00:18:09.431 13:28:26 -- common/autotest_common.sh@940 -- # kill -0 66153 00:18:09.431 13:28:26 -- common/autotest_common.sh@941 -- # uname 00:18:09.431 13:28:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.432 13:28:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66153 00:18:09.432 killing process with pid 66153 00:18:09.432 13:28:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:09.432 13:28:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:09.432 13:28:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66153' 00:18:09.432 13:28:26 -- common/autotest_common.sh@955 -- # kill 66153 00:18:09.432 [2024-04-26 13:28:26.809121] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:18:09.432 13:28:26 -- common/autotest_common.sh@960 -- # wait 66153 00:18:09.690 13:28:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:09.690 13:28:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:09.690 13:28:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:09.690 13:28:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.690 13:28:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.690 13:28:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.690 13:28:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.690 13:28:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.690 13:28:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:09.690 ************************************ 00:18:09.690 END TEST nvmf_discovery 00:18:09.690 ************************************ 00:18:09.690 00:18:09.690 real 0m2.352s 00:18:09.690 user 0m6.224s 00:18:09.690 sys 0m0.584s 00:18:09.690 13:28:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:09.690 13:28:27 -- common/autotest_common.sh@10 -- # set +x 00:18:09.950 13:28:27 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:18:09.950 13:28:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:09.950 13:28:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:09.950 13:28:27 -- common/autotest_common.sh@10 -- # set +x 00:18:09.950 ************************************ 00:18:09.950 START TEST nvmf_referrals 00:18:09.950 ************************************ 00:18:09.950 13:28:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:18:09.950 * Looking for test storage... 
00:18:09.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:09.950 13:28:27 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.950 13:28:27 -- nvmf/common.sh@7 -- # uname -s 00:18:09.950 13:28:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.950 13:28:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.950 13:28:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.950 13:28:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.950 13:28:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.950 13:28:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.950 13:28:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.950 13:28:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.950 13:28:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.950 13:28:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:09.950 13:28:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:09.950 13:28:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.950 13:28:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.950 13:28:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.950 13:28:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.950 13:28:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.950 13:28:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.950 13:28:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.950 13:28:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.950 13:28:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.950 13:28:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.950 13:28:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.950 13:28:27 -- paths/export.sh@5 -- # export PATH 00:18:09.950 13:28:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.950 13:28:27 -- nvmf/common.sh@47 -- # : 0 00:18:09.950 13:28:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.950 13:28:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.950 13:28:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.950 13:28:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.950 13:28:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.950 13:28:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.950 13:28:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.950 13:28:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.950 13:28:27 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:18:09.950 13:28:27 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:18:09.950 13:28:27 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:18:09.950 13:28:27 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:18:09.950 13:28:27 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:09.950 13:28:27 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:09.950 13:28:27 -- target/referrals.sh@37 -- # nvmftestinit 00:18:09.950 13:28:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:09.950 13:28:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.950 13:28:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:09.950 13:28:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:09.950 13:28:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:09.950 13:28:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.950 13:28:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.950 13:28:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.950 13:28:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:09.950 13:28:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:09.950 13:28:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.950 13:28:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:18:09.950 13:28:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.950 13:28:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:09.950 13:28:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.950 13:28:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.950 13:28:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.950 13:28:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.950 13:28:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.950 13:28:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.950 13:28:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.950 13:28:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.950 13:28:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:09.950 13:28:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:09.950 Cannot find device "nvmf_tgt_br" 00:18:09.950 13:28:27 -- nvmf/common.sh@155 -- # true 00:18:09.950 13:28:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:09.950 Cannot find device "nvmf_tgt_br2" 00:18:09.950 13:28:27 -- nvmf/common.sh@156 -- # true 00:18:09.950 13:28:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:09.950 13:28:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:09.950 Cannot find device "nvmf_tgt_br" 00:18:09.950 13:28:27 -- nvmf/common.sh@158 -- # true 00:18:09.950 13:28:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:10.209 Cannot find device "nvmf_tgt_br2" 00:18:10.209 13:28:27 -- nvmf/common.sh@159 -- # true 00:18:10.209 13:28:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:10.209 13:28:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:10.210 13:28:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.210 13:28:27 -- nvmf/common.sh@162 -- # true 00:18:10.210 13:28:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.210 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.210 13:28:27 -- nvmf/common.sh@163 -- # true 00:18:10.210 13:28:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.210 13:28:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.210 13:28:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.210 13:28:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.210 13:28:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.210 13:28:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.210 13:28:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.210 13:28:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.210 13:28:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.210 13:28:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:10.210 13:28:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:10.210 13:28:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
00:18:10.210 13:28:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:10.210 13:28:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.210 13:28:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.210 13:28:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.210 13:28:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:10.210 13:28:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:10.210 13:28:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.210 13:28:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.210 13:28:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.210 13:28:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.210 13:28:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.210 13:28:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:10.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:10.210 00:18:10.210 --- 10.0.0.2 ping statistics --- 00:18:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.210 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:10.210 13:28:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:10.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:18:10.210 00:18:10.210 --- 10.0.0.3 ping statistics --- 00:18:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.210 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:10.210 13:28:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:10.210 00:18:10.210 --- 10.0.0.1 ping statistics --- 00:18:10.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.210 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:10.210 13:28:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.210 13:28:27 -- nvmf/common.sh@422 -- # return 0 00:18:10.210 13:28:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:10.210 13:28:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.210 13:28:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:10.210 13:28:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:10.210 13:28:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.210 13:28:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:10.210 13:28:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:10.210 13:28:27 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:18:10.210 13:28:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:10.210 13:28:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:10.210 13:28:27 -- common/autotest_common.sh@10 -- # set +x 00:18:10.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
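The nvmf_veth_init trace above builds the test topology used throughout this run: the initiator leg nvmf_init_if (10.0.0.1/24) stays in the root namespace, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, all bridge-side veth peers are enslaved to nvmf_br, and iptables opens TCP/4420 toward the initiator interface. A minimal standalone sketch of the same wiring, with interface names and addresses taken from the log (run as root; illustrative only, not the harness code itself):

    #!/usr/bin/env bash
    set -e
    # Target-side interfaces live in their own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    # Three veth pairs: one initiator leg, two target legs; the *_br ends stay on the host.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addressing as in the log: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    # All host-side peers hang off one bridge so the three legs can reach each other.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    # Allow NVMe/TCP (port 4420) into the initiator leg and forwarding across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Same connectivity check the harness performs, in both directions.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1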
00:18:10.210 13:28:27 -- nvmf/common.sh@470 -- # nvmfpid=66380 00:18:10.210 13:28:27 -- nvmf/common.sh@471 -- # waitforlisten 66380 00:18:10.210 13:28:27 -- common/autotest_common.sh@817 -- # '[' -z 66380 ']' 00:18:10.210 13:28:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.210 13:28:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.210 13:28:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:10.210 13:28:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.210 13:28:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:10.210 13:28:27 -- common/autotest_common.sh@10 -- # set +x 00:18:10.469 [2024-04-26 13:28:27.718379] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:10.469 [2024-04-26 13:28:27.718490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.469 [2024-04-26 13:28:27.860605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.728 [2024-04-26 13:28:27.980588] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.728 [2024-04-26 13:28:27.980652] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.728 [2024-04-26 13:28:27.980664] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.728 [2024-04-26 13:28:27.980672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.728 [2024-04-26 13:28:27.980678] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
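nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on /var/tmp/spdk.sock. A simplified stand-in for that startup-and-wait step, assuming the repo layout used in this run (the polling loop approximates waitforlisten rather than copying it):

    # Launch the target inside the test namespace with the flags from the log.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the app is ready.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up and serving RPCs on /var/tmp/spdk.sock"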
00:18:10.728 [2024-04-26 13:28:27.980847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.728 [2024-04-26 13:28:27.981181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.728 [2024-04-26 13:28:27.981704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.728 [2024-04-26 13:28:27.981664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.664 13:28:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.664 13:28:28 -- common/autotest_common.sh@850 -- # return 0 00:18:11.664 13:28:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:11.664 13:28:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.664 13:28:28 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 [2024-04-26 13:28:28.796488] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 [2024-04-26 13:28:28.817065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- target/referrals.sh@48 -- # jq length 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:18:11.664 13:28:28 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:18:11.664 13:28:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:11.664 13:28:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:11.664 13:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:18:11.664 13:28:28 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:18:11.664 13:28:28 -- target/referrals.sh@21 -- # sort 00:18:11.664 13:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:18:11.664 13:28:28 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:18:11.664 13:28:28 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:18:11.664 13:28:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:11.664 13:28:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:11.664 13:28:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:11.664 13:28:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:11.664 13:28:28 -- target/referrals.sh@26 -- # sort 00:18:11.664 13:28:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:18:11.664 13:28:29 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:18:11.664 13:28:29 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:18:11.664 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:29 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:18:11.664 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:29 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:18:11.664 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.664 13:28:29 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:11.664 13:28:29 -- target/referrals.sh@56 -- # jq length 00:18:11.664 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.664 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.922 13:28:29 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:18:11.922 13:28:29 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:18:11.922 13:28:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:11.922 13:28:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:11.922 13:28:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:11.922 13:28:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:11.922 13:28:29 -- target/referrals.sh@26 -- # sort 00:18:11.922 13:28:29 -- target/referrals.sh@26 -- # echo 00:18:11.922 13:28:29 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:18:11.922 13:28:29 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:18:11.922 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.922 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.922 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.922 13:28:29 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:18:11.922 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.922 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.922 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.922 13:28:29 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:18:11.922 13:28:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:11.922 13:28:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:11.922 13:28:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:18:11.923 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:11.923 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:11.923 13:28:29 -- target/referrals.sh@21 -- # sort 00:18:11.923 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:11.923 13:28:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:18:11.923 13:28:29 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:18:11.923 13:28:29 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:18:11.923 13:28:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:11.923 13:28:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:11.923 13:28:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:11.923 13:28:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:11.923 13:28:29 -- target/referrals.sh@26 -- # sort 00:18:11.923 13:28:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:18:11.923 13:28:29 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:18:11.923 13:28:29 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:18:11.923 13:28:29 -- target/referrals.sh@67 -- # jq -r .subnqn 00:18:11.923 13:28:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:18:11.923 13:28:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:11.923 13:28:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:18:12.181 13:28:29 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:18:12.181 13:28:29 -- target/referrals.sh@68 -- # jq -r .subnqn 00:18:12.181 13:28:29 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:18:12.181 13:28:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:18:12.181 13:28:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 
--hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:12.181 13:28:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:18:12.181 13:28:29 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:18:12.181 13:28:29 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:18:12.181 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.181 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.181 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.181 13:28:29 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:18:12.181 13:28:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:18:12.181 13:28:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:12.181 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.181 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.181 13:28:29 -- target/referrals.sh@21 -- # sort 00:18:12.181 13:28:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:18:12.181 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.181 13:28:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:18:12.181 13:28:29 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:18:12.181 13:28:29 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:18:12.181 13:28:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:12.181 13:28:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:12.181 13:28:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:12.181 13:28:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:12.181 13:28:29 -- target/referrals.sh@26 -- # sort 00:18:12.440 13:28:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:18:12.440 13:28:29 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:18:12.440 13:28:29 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:18:12.440 13:28:29 -- target/referrals.sh@75 -- # jq -r .subnqn 00:18:12.440 13:28:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:18:12.440 13:28:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:12.440 13:28:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:18:12.440 13:28:29 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:18:12.440 13:28:29 -- target/referrals.sh@76 -- # jq -r .subnqn 00:18:12.440 13:28:29 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:18:12.440 13:28:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:18:12.440 13:28:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:12.440 13:28:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
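The referral checks above compare two views of the same state: the target's view through nvmf_discovery_get_referrals and the host's view through nvme discover against the 8009 discovery listener, both filtered with jq. A condensed sketch of that round-trip, using the addresses and ports from the log (the --hostnqn/--hostid options are dropped here for brevity):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Register three referrals on port 4430, as the test does.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Target-side view of the referral list.
    "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # Host-side view: every discovery log entry except the current discovery subsystem.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    # Removal is symmetric; an empty list from either view means the referral is gone.
    "$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430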
00:18:12.440 13:28:29 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:18:12.440 13:28:29 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:18:12.440 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.440 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.440 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.440 13:28:29 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:18:12.440 13:28:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.440 13:28:29 -- target/referrals.sh@82 -- # jq length 00:18:12.440 13:28:29 -- common/autotest_common.sh@10 -- # set +x 00:18:12.440 13:28:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.440 13:28:29 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:18:12.440 13:28:29 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:18:12.440 13:28:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:18:12.440 13:28:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:18:12.440 13:28:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -a 10.0.0.2 -s 8009 -o json 00:18:12.440 13:28:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:18:12.440 13:28:29 -- target/referrals.sh@26 -- # sort 00:18:12.699 13:28:29 -- target/referrals.sh@26 -- # echo 00:18:12.699 13:28:29 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:18:12.699 13:28:29 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:18:12.699 13:28:29 -- target/referrals.sh@86 -- # nvmftestfini 00:18:12.699 13:28:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:12.699 13:28:29 -- nvmf/common.sh@117 -- # sync 00:18:12.699 13:28:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.699 13:28:29 -- nvmf/common.sh@120 -- # set +e 00:18:12.699 13:28:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.699 13:28:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.699 rmmod nvme_tcp 00:18:12.699 rmmod nvme_fabrics 00:18:12.699 rmmod nvme_keyring 00:18:12.699 13:28:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.699 13:28:30 -- nvmf/common.sh@124 -- # set -e 00:18:12.699 13:28:30 -- nvmf/common.sh@125 -- # return 0 00:18:12.699 13:28:30 -- nvmf/common.sh@478 -- # '[' -n 66380 ']' 00:18:12.699 13:28:30 -- nvmf/common.sh@479 -- # killprocess 66380 00:18:12.699 13:28:30 -- common/autotest_common.sh@936 -- # '[' -z 66380 ']' 00:18:12.699 13:28:30 -- common/autotest_common.sh@940 -- # kill -0 66380 00:18:12.699 13:28:30 -- common/autotest_common.sh@941 -- # uname 00:18:12.699 13:28:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.699 13:28:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66380 00:18:12.699 13:28:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.699 13:28:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.699 killing process with pid 66380 00:18:12.699 13:28:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66380' 00:18:12.699 13:28:30 -- common/autotest_common.sh@955 -- # kill 66380 00:18:12.699 13:28:30 -- common/autotest_common.sh@960 -- # wait 66380 00:18:13.266 13:28:30 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:13.266 13:28:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:13.266 13:28:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:13.266 13:28:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.266 13:28:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.266 13:28:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.266 13:28:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.266 13:28:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.266 13:28:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:13.266 00:18:13.266 real 0m3.267s 00:18:13.266 user 0m10.410s 00:18:13.266 sys 0m0.837s 00:18:13.266 13:28:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.266 ************************************ 00:18:13.266 13:28:30 -- common/autotest_common.sh@10 -- # set +x 00:18:13.266 END TEST nvmf_referrals 00:18:13.266 ************************************ 00:18:13.266 13:28:30 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:13.266 13:28:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:13.266 13:28:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.266 13:28:30 -- common/autotest_common.sh@10 -- # set +x 00:18:13.266 ************************************ 00:18:13.266 START TEST nvmf_connect_disconnect 00:18:13.266 ************************************ 00:18:13.266 13:28:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:18:13.266 * Looking for test storage... 00:18:13.266 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:13.266 13:28:30 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.266 13:28:30 -- nvmf/common.sh@7 -- # uname -s 00:18:13.266 13:28:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.266 13:28:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.266 13:28:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.266 13:28:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.266 13:28:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.266 13:28:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.266 13:28:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.266 13:28:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.266 13:28:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.266 13:28:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.266 13:28:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:13.266 13:28:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:13.266 13:28:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.266 13:28:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.266 13:28:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.266 13:28:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.266 13:28:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.266 13:28:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.266 13:28:30 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.266 13:28:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.266 13:28:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.266 13:28:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.266 13:28:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.266 13:28:30 -- paths/export.sh@5 -- # export PATH 00:18:13.266 13:28:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.266 13:28:30 -- nvmf/common.sh@47 -- # : 0 00:18:13.266 13:28:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.266 13:28:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.266 13:28:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.266 13:28:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.266 13:28:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.266 13:28:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.266 13:28:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.266 13:28:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.267 13:28:30 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:13.267 13:28:30 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:13.267 13:28:30 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:18:13.267 13:28:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:13.267 13:28:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.267 13:28:30 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:18:13.267 13:28:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:13.267 13:28:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:13.267 13:28:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.267 13:28:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.267 13:28:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.524 13:28:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:13.524 13:28:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:13.524 13:28:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:13.524 13:28:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:13.524 13:28:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:13.524 13:28:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:13.524 13:28:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.524 13:28:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.524 13:28:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.524 13:28:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:13.524 13:28:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.524 13:28:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.524 13:28:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.524 13:28:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.524 13:28:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.524 13:28:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.524 13:28:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.524 13:28:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.524 13:28:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:13.524 13:28:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:13.524 Cannot find device "nvmf_tgt_br" 00:18:13.524 13:28:30 -- nvmf/common.sh@155 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.524 Cannot find device "nvmf_tgt_br2" 00:18:13.524 13:28:30 -- nvmf/common.sh@156 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:13.524 13:28:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:13.524 Cannot find device "nvmf_tgt_br" 00:18:13.524 13:28:30 -- nvmf/common.sh@158 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:13.524 Cannot find device "nvmf_tgt_br2" 00:18:13.524 13:28:30 -- nvmf/common.sh@159 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:13.524 13:28:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:13.524 13:28:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.524 13:28:30 -- nvmf/common.sh@162 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.524 13:28:30 -- nvmf/common.sh@163 -- # true 00:18:13.524 13:28:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.524 13:28:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:18:13.524 13:28:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.524 13:28:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.524 13:28:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.524 13:28:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.524 13:28:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.524 13:28:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.524 13:28:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:13.524 13:28:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:13.524 13:28:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:13.524 13:28:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:13.524 13:28:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:13.524 13:28:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.524 13:28:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.524 13:28:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.524 13:28:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:13.783 13:28:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:13.783 13:28:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.783 13:28:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.783 13:28:31 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.783 13:28:31 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.783 13:28:31 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.783 13:28:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:13.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:18:13.783 00:18:13.783 --- 10.0.0.2 ping statistics --- 00:18:13.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.783 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:13.783 13:28:31 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:13.783 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.783 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:13.783 00:18:13.783 --- 10.0.0.3 ping statistics --- 00:18:13.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.783 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:13.783 13:28:31 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:13.783 00:18:13.783 --- 10.0.0.1 ping statistics --- 00:18:13.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.783 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:13.783 13:28:31 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.783 13:28:31 -- nvmf/common.sh@422 -- # return 0 00:18:13.783 13:28:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:13.783 13:28:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.783 13:28:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:13.783 13:28:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:13.783 13:28:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.783 13:28:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:13.783 13:28:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:13.783 13:28:31 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:18:13.783 13:28:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:13.783 13:28:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:13.783 13:28:31 -- common/autotest_common.sh@10 -- # set +x 00:18:13.783 13:28:31 -- nvmf/common.sh@470 -- # nvmfpid=66695 00:18:13.783 13:28:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:13.783 13:28:31 -- nvmf/common.sh@471 -- # waitforlisten 66695 00:18:13.783 13:28:31 -- common/autotest_common.sh@817 -- # '[' -z 66695 ']' 00:18:13.783 13:28:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.783 13:28:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.783 13:28:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.783 13:28:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.783 13:28:31 -- common/autotest_common.sh@10 -- # set +x 00:18:13.783 [2024-04-26 13:28:31.172678] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:13.783 [2024-04-26 13:28:31.172882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.046 [2024-04-26 13:28:31.329423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.305 [2024-04-26 13:28:31.532183] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.305 [2024-04-26 13:28:31.532304] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:14.305 [2024-04-26 13:28:31.532354] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.305 [2024-04-26 13:28:31.532380] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.305 [2024-04-26 13:28:31.532397] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
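With the connect_disconnect.sh target now up, the trace below creates the 64 MiB, 512-byte-block malloc bdev declared at the top of that script, exposes it as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and then runs num_iterations=5 connect/disconnect cycles. A condensed sketch of one such cycle, with the RPC commands and flags taken from the trace (simplified: the --hostnqn/--hostid options that common.sh defines are omitted here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0
    # 64 MiB malloc bdev with 512-byte blocks; the call returns the bdev name (Malloc0 in this run).
    malloc_bdev=$("$rpc" bdev_malloc_create 64 512)
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$malloc_bdev"
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # One connect/disconnect cycle; the test repeats this five times.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"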
00:18:14.305 [2024-04-26 13:28:31.532612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.305 [2024-04-26 13:28:31.533222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.305 [2024-04-26 13:28:31.534286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.305 [2024-04-26 13:28:31.534298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.869 13:28:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:14.869 13:28:32 -- common/autotest_common.sh@850 -- # return 0 00:18:14.869 13:28:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:14.869 13:28:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 13:28:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:18:14.869 13:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 [2024-04-26 13:28:32.135342] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.869 13:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:18:14.869 13:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 13:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:14.869 13:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 13:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.869 13:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 13:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.869 13:28:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:14.869 13:28:32 -- common/autotest_common.sh@10 -- # set +x 00:18:14.869 [2024-04-26 13:28:32.205495] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.869 13:28:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:18:14.869 13:28:32 -- target/connect_disconnect.sh@34 -- # set +x 00:18:17.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:19.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:24.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:26.279 13:28:43 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:26.279 13:28:43 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:26.279 13:28:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:26.279 13:28:43 -- nvmf/common.sh@117 -- # sync 00:18:26.279 13:28:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.279 13:28:43 -- nvmf/common.sh@120 -- # set +e 00:18:26.279 13:28:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.279 13:28:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.279 rmmod nvme_tcp 00:18:26.279 rmmod nvme_fabrics 00:18:26.279 rmmod nvme_keyring 00:18:26.279 13:28:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.279 13:28:43 -- nvmf/common.sh@124 -- # set -e 00:18:26.279 13:28:43 -- nvmf/common.sh@125 -- # return 0 00:18:26.279 13:28:43 -- nvmf/common.sh@478 -- # '[' -n 66695 ']' 00:18:26.279 13:28:43 -- nvmf/common.sh@479 -- # killprocess 66695 00:18:26.279 13:28:43 -- common/autotest_common.sh@936 -- # '[' -z 66695 ']' 00:18:26.279 13:28:43 -- common/autotest_common.sh@940 -- # kill -0 66695 00:18:26.279 13:28:43 -- common/autotest_common.sh@941 -- # uname 00:18:26.279 13:28:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:26.279 13:28:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66695 00:18:26.279 13:28:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:26.279 13:28:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:26.279 13:28:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66695' 00:18:26.279 killing process with pid 66695 00:18:26.279 13:28:43 -- common/autotest_common.sh@955 -- # kill 66695 00:18:26.279 13:28:43 -- common/autotest_common.sh@960 -- # wait 66695 00:18:26.537 13:28:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:26.537 13:28:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:26.537 13:28:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:26.537 13:28:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.537 13:28:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.537 13:28:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.537 13:28:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.537 13:28:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.537 13:28:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:26.537 00:18:26.537 real 0m13.313s 00:18:26.537 user 0m48.108s 00:18:26.537 sys 0m2.085s 00:18:26.537 13:28:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:26.537 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:18:26.537 ************************************ 00:18:26.537 END TEST nvmf_connect_disconnect 00:18:26.537 ************************************ 00:18:26.537 13:28:43 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:26.537 13:28:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:26.537 13:28:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.537 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:18:26.795 ************************************ 00:18:26.795 START TEST nvmf_multitarget 00:18:26.795 ************************************ 00:18:26.795 13:28:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:26.795 * Looking for test storage... 
00:18:26.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:26.795 13:28:44 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.795 13:28:44 -- nvmf/common.sh@7 -- # uname -s 00:18:26.795 13:28:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.795 13:28:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.795 13:28:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.795 13:28:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.795 13:28:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.795 13:28:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.795 13:28:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.795 13:28:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.795 13:28:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.795 13:28:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.795 13:28:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:26.795 13:28:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:26.795 13:28:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.795 13:28:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.795 13:28:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.795 13:28:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.795 13:28:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.795 13:28:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.795 13:28:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.795 13:28:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.795 13:28:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.795 13:28:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.795 13:28:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.795 13:28:44 -- paths/export.sh@5 -- # export PATH 00:18:26.795 13:28:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.795 13:28:44 -- nvmf/common.sh@47 -- # : 0 00:18:26.795 13:28:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:26.795 13:28:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:26.795 13:28:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.795 13:28:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.795 13:28:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.795 13:28:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:26.795 13:28:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:26.795 13:28:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:26.795 13:28:44 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:26.795 13:28:44 -- target/multitarget.sh@15 -- # nvmftestinit 00:18:26.795 13:28:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:26.795 13:28:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.795 13:28:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:26.795 13:28:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:26.795 13:28:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:26.795 13:28:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.795 13:28:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.795 13:28:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.795 13:28:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:26.796 13:28:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:26.796 13:28:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:26.796 13:28:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:26.796 13:28:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:26.796 13:28:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:26.796 13:28:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.796 13:28:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.796 13:28:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:26.796 13:28:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:26.796 13:28:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.796 13:28:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.796 13:28:44 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.796 13:28:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.796 13:28:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.796 13:28:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.796 13:28:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.796 13:28:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.796 13:28:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:26.796 13:28:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:26.796 Cannot find device "nvmf_tgt_br" 00:18:26.796 13:28:44 -- nvmf/common.sh@155 -- # true 00:18:26.796 13:28:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:26.796 Cannot find device "nvmf_tgt_br2" 00:18:26.796 13:28:44 -- nvmf/common.sh@156 -- # true 00:18:26.796 13:28:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:26.796 13:28:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:26.796 Cannot find device "nvmf_tgt_br" 00:18:26.796 13:28:44 -- nvmf/common.sh@158 -- # true 00:18:26.796 13:28:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:26.796 Cannot find device "nvmf_tgt_br2" 00:18:26.796 13:28:44 -- nvmf/common.sh@159 -- # true 00:18:26.796 13:28:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:27.054 13:28:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:27.054 13:28:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:27.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:27.054 13:28:44 -- nvmf/common.sh@162 -- # true 00:18:27.054 13:28:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:27.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:27.054 13:28:44 -- nvmf/common.sh@163 -- # true 00:18:27.054 13:28:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:27.054 13:28:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:27.054 13:28:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:27.054 13:28:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:27.054 13:28:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:27.054 13:28:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:27.054 13:28:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:27.054 13:28:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:27.054 13:28:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:27.054 13:28:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:27.054 13:28:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:27.054 13:28:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:27.054 13:28:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:27.054 13:28:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:27.054 13:28:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:27.054 13:28:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:18:27.054 13:28:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:27.054 13:28:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:27.054 13:28:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:27.054 13:28:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:27.054 13:28:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:27.054 13:28:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:27.313 13:28:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:27.313 13:28:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:27.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:27.313 00:18:27.313 --- 10.0.0.2 ping statistics --- 00:18:27.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.313 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:27.313 13:28:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:27.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:27.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:27.313 00:18:27.313 --- 10.0.0.3 ping statistics --- 00:18:27.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.313 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:27.313 13:28:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:27.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:18:27.313 00:18:27.313 --- 10.0.0.1 ping statistics --- 00:18:27.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.313 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:18:27.313 13:28:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.313 13:28:44 -- nvmf/common.sh@422 -- # return 0 00:18:27.313 13:28:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:27.313 13:28:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.313 13:28:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:27.313 13:28:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:27.313 13:28:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.313 13:28:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:27.313 13:28:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:27.313 13:28:44 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:27.313 13:28:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:27.313 13:28:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:27.313 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:27.313 13:28:44 -- nvmf/common.sh@470 -- # nvmfpid=67104 00:18:27.313 13:28:44 -- nvmf/common.sh@471 -- # waitforlisten 67104 00:18:27.313 13:28:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:27.313 13:28:44 -- common/autotest_common.sh@817 -- # '[' -z 67104 ']' 00:18:27.313 13:28:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.313 13:28:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:27.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:27.313 13:28:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.313 13:28:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:27.313 13:28:44 -- common/autotest_common.sh@10 -- # set +x 00:18:27.313 [2024-04-26 13:28:44.608514] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:27.313 [2024-04-26 13:28:44.608656] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.313 [2024-04-26 13:28:44.752916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:27.572 [2024-04-26 13:28:44.879627] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.572 [2024-04-26 13:28:44.879697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.572 [2024-04-26 13:28:44.879721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.572 [2024-04-26 13:28:44.879733] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.572 [2024-04-26 13:28:44.879743] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.572 [2024-04-26 13:28:44.879956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.572 [2024-04-26 13:28:44.880113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.572 [2024-04-26 13:28:44.880740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:27.572 [2024-04-26 13:28:44.880773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.508 13:28:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.508 13:28:45 -- common/autotest_common.sh@850 -- # return 0 00:18:28.508 13:28:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:28.508 13:28:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:28.508 13:28:45 -- common/autotest_common.sh@10 -- # set +x 00:18:28.508 13:28:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.508 13:28:45 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:28.508 13:28:45 -- target/multitarget.sh@21 -- # jq length 00:18:28.508 13:28:45 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:28.508 13:28:45 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:28.508 13:28:45 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:28.508 "nvmf_tgt_1" 00:18:28.508 13:28:45 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:28.766 "nvmf_tgt_2" 00:18:28.766 13:28:46 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:28.766 13:28:46 -- target/multitarget.sh@28 -- # jq length 00:18:28.766 13:28:46 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:28.766 13:28:46 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
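The nvmf_veth_init bring-up traced above reduces to a small amount of iproute2/iptables work: the two target-side veth ends are moved into the nvmf_tgt_ns_spdk namespace, the host-side ends are tied together with the nvmf_br bridge, and TCP port 4420 is opened toward the initiator interface before the three connectivity pings. What follows is a hand-written, minimal sketch of that sequence for anyone reproducing the topology outside the harness; the device names and 10.0.0.x addresses are copied from the trace, while the ordering details and error handling are editorial assumptions rather than the authoritative test/nvmf/common.sh code.

#!/usr/bin/env bash
# Minimal sketch of the veth/netns/bridge topology shown in the trace above.
# Not part of the captured log; names and addresses are taken from the log,
# everything else is an illustrative assumption. Requires root.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
# (the harness first tears down any leftovers from a previous run; omitted here)
ip netns add "$NS"

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-facing ends live inside the namespace; the peers stay on the host.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Host-side bridge that joins the three host-facing veth ends.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # initiator -> targets
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> initiator

The target application is then launched inside that namespace (the trace shows ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the EAL and reactor start-up notices above originate from within the namespace.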
nvmf_delete_target -n nvmf_tgt_1 00:18:29.024 true 00:18:29.024 13:28:46 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:29.024 true 00:18:29.024 13:28:46 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:29.025 13:28:46 -- target/multitarget.sh@35 -- # jq length 00:18:29.332 13:28:46 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:29.332 13:28:46 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:29.332 13:28:46 -- target/multitarget.sh@41 -- # nvmftestfini 00:18:29.332 13:28:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:29.332 13:28:46 -- nvmf/common.sh@117 -- # sync 00:18:29.332 13:28:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.332 13:28:46 -- nvmf/common.sh@120 -- # set +e 00:18:29.332 13:28:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.332 13:28:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.332 rmmod nvme_tcp 00:18:29.332 rmmod nvme_fabrics 00:18:29.332 rmmod nvme_keyring 00:18:29.332 13:28:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.332 13:28:46 -- nvmf/common.sh@124 -- # set -e 00:18:29.332 13:28:46 -- nvmf/common.sh@125 -- # return 0 00:18:29.332 13:28:46 -- nvmf/common.sh@478 -- # '[' -n 67104 ']' 00:18:29.332 13:28:46 -- nvmf/common.sh@479 -- # killprocess 67104 00:18:29.332 13:28:46 -- common/autotest_common.sh@936 -- # '[' -z 67104 ']' 00:18:29.332 13:28:46 -- common/autotest_common.sh@940 -- # kill -0 67104 00:18:29.332 13:28:46 -- common/autotest_common.sh@941 -- # uname 00:18:29.332 13:28:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.332 13:28:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67104 00:18:29.332 13:28:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:29.332 13:28:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:29.332 killing process with pid 67104 00:18:29.332 13:28:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67104' 00:18:29.332 13:28:46 -- common/autotest_common.sh@955 -- # kill 67104 00:18:29.332 13:28:46 -- common/autotest_common.sh@960 -- # wait 67104 00:18:29.592 13:28:46 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:29.592 13:28:46 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:29.592 13:28:46 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:29.592 13:28:46 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.592 13:28:46 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.592 13:28:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.592 13:28:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.592 13:28:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.592 13:28:47 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:29.592 00:18:29.592 real 0m3.004s 00:18:29.592 user 0m9.584s 00:18:29.592 sys 0m0.741s 00:18:29.592 13:28:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:29.592 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:29.592 ************************************ 00:18:29.592 END TEST nvmf_multitarget 00:18:29.592 ************************************ 00:18:29.851 13:28:47 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:29.851 13:28:47 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:18:29.851 13:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.851 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:29.851 ************************************ 00:18:29.851 START TEST nvmf_rpc 00:18:29.851 ************************************ 00:18:29.851 13:28:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:29.851 * Looking for test storage... 00:18:29.851 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:29.851 13:28:47 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:29.851 13:28:47 -- nvmf/common.sh@7 -- # uname -s 00:18:29.851 13:28:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.851 13:28:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.851 13:28:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.851 13:28:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.851 13:28:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.852 13:28:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.852 13:28:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.852 13:28:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.852 13:28:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.852 13:28:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.852 13:28:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:29.852 13:28:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:29.852 13:28:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.852 13:28:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.852 13:28:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:29.852 13:28:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:29.852 13:28:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:29.852 13:28:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.852 13:28:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.852 13:28:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.852 13:28:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.852 13:28:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.852 13:28:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.852 13:28:47 -- paths/export.sh@5 -- # export PATH 00:18:29.852 13:28:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.852 13:28:47 -- nvmf/common.sh@47 -- # : 0 00:18:29.852 13:28:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:29.852 13:28:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:29.852 13:28:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:29.852 13:28:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.852 13:28:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.852 13:28:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:29.852 13:28:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:29.852 13:28:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:29.852 13:28:47 -- target/rpc.sh@11 -- # loops=5 00:18:29.852 13:28:47 -- target/rpc.sh@23 -- # nvmftestinit 00:18:29.852 13:28:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:29.852 13:28:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.852 13:28:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:29.852 13:28:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:29.852 13:28:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:29.852 13:28:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.852 13:28:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.852 13:28:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.852 13:28:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:29.852 13:28:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:29.852 13:28:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:29.852 13:28:47 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:29.852 13:28:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:29.852 13:28:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:29.852 13:28:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.852 13:28:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.852 13:28:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:29.852 13:28:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:29.852 13:28:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:29.852 13:28:47 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:29.852 13:28:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:29.852 13:28:47 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.852 13:28:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:29.852 13:28:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:29.852 13:28:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:29.852 13:28:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:29.852 13:28:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:29.852 13:28:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:29.852 Cannot find device "nvmf_tgt_br" 00:18:29.852 13:28:47 -- nvmf/common.sh@155 -- # true 00:18:29.852 13:28:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.112 Cannot find device "nvmf_tgt_br2" 00:18:30.112 13:28:47 -- nvmf/common.sh@156 -- # true 00:18:30.112 13:28:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:30.112 13:28:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:30.112 Cannot find device "nvmf_tgt_br" 00:18:30.112 13:28:47 -- nvmf/common.sh@158 -- # true 00:18:30.112 13:28:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:30.112 Cannot find device "nvmf_tgt_br2" 00:18:30.112 13:28:47 -- nvmf/common.sh@159 -- # true 00:18:30.112 13:28:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:30.112 13:28:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:30.112 13:28:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:30.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.112 13:28:47 -- nvmf/common.sh@162 -- # true 00:18:30.112 13:28:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.112 13:28:47 -- nvmf/common.sh@163 -- # true 00:18:30.112 13:28:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.112 13:28:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.112 13:28:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.112 13:28:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.112 13:28:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.112 13:28:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.112 13:28:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.112 13:28:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:30.112 13:28:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:30.112 13:28:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:30.112 13:28:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:30.112 13:28:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:30.112 13:28:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:30.112 13:28:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.112 13:28:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.112 13:28:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.112 13:28:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:18:30.112 13:28:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:30.112 13:28:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.371 13:28:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.371 13:28:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.371 13:28:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.371 13:28:47 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.371 13:28:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:30.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:30.371 00:18:30.371 --- 10.0.0.2 ping statistics --- 00:18:30.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.371 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:30.371 13:28:47 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:30.371 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.371 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:30.371 00:18:30.371 --- 10.0.0.3 ping statistics --- 00:18:30.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.371 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:30.371 13:28:47 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:30.371 00:18:30.371 --- 10.0.0.1 ping statistics --- 00:18:30.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.371 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:30.371 13:28:47 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.371 13:28:47 -- nvmf/common.sh@422 -- # return 0 00:18:30.371 13:28:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:30.371 13:28:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.371 13:28:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:30.371 13:28:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:30.371 13:28:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.371 13:28:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:30.371 13:28:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:30.371 13:28:47 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:30.371 13:28:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:30.371 13:28:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:30.371 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:30.371 13:28:47 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:30.371 13:28:47 -- nvmf/common.sh@470 -- # nvmfpid=67340 00:18:30.371 13:28:47 -- nvmf/common.sh@471 -- # waitforlisten 67340 00:18:30.371 13:28:47 -- common/autotest_common.sh@817 -- # '[' -z 67340 ']' 00:18:30.371 13:28:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.371 13:28:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.371 13:28:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
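The nvmf_multitarget run that finished above exercises a short RPC sequence against multitarget_rpc.py: count the existing targets, add nvmf_tgt_1 and nvmf_tgt_2, recount, delete both, and confirm only the default target remains. Below is a minimal sketch of that flow, assuming the helper script and jq are on the path; the -n/-s flags are copied verbatim from the trace, and the assertions mirror its '[' N '!=' N ']' checks.

#!/usr/bin/env bash
# Sketch of the multitarget flow traced in the nvmf_multitarget test above.
# Not part of the captured log; rpc_py and the target names come from the trace.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

# The application starts with exactly one (default) target.
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

# Add two extra targets (flags copied verbatim from the trace above).
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

# Remove them again and confirm only the default target is left.
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]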
00:18:30.371 13:28:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.371 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:18:30.371 [2024-04-26 13:28:47.679232] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:30.371 [2024-04-26 13:28:47.679331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.371 [2024-04-26 13:28:47.816619] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.630 [2024-04-26 13:28:47.940550] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.630 [2024-04-26 13:28:47.940618] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.630 [2024-04-26 13:28:47.940631] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.630 [2024-04-26 13:28:47.940640] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.630 [2024-04-26 13:28:47.940648] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.630 [2024-04-26 13:28:47.940848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.630 [2024-04-26 13:28:47.941115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.630 [2024-04-26 13:28:47.941642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.630 [2024-04-26 13:28:47.941676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.567 13:28:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.567 13:28:48 -- common/autotest_common.sh@850 -- # return 0 00:18:31.567 13:28:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:31.567 13:28:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:31.567 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 13:28:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.567 13:28:48 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:31.567 13:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.567 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 13:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.567 13:28:48 -- target/rpc.sh@26 -- # stats='{ 00:18:31.567 "poll_groups": [ 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_0", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [] 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_1", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [] 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_2", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [] 00:18:31.567 }, 00:18:31.567 { 
00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_3", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [] 00:18:31.567 } 00:18:31.567 ], 00:18:31.567 "tick_rate": 2200000000 00:18:31.567 }' 00:18:31.567 13:28:48 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:31.567 13:28:48 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:31.567 13:28:48 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:31.567 13:28:48 -- target/rpc.sh@15 -- # wc -l 00:18:31.567 13:28:48 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:31.567 13:28:48 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:31.567 13:28:48 -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:31.567 13:28:48 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.567 13:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.567 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 [2024-04-26 13:28:48.847290] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.567 13:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.567 13:28:48 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:31.567 13:28:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.567 13:28:48 -- common/autotest_common.sh@10 -- # set +x 00:18:31.567 13:28:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.567 13:28:48 -- target/rpc.sh@33 -- # stats='{ 00:18:31.567 "poll_groups": [ 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_0", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [ 00:18:31.567 { 00:18:31.567 "trtype": "TCP" 00:18:31.567 } 00:18:31.567 ] 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_1", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [ 00:18:31.567 { 00:18:31.567 "trtype": "TCP" 00:18:31.567 } 00:18:31.567 ] 00:18:31.567 }, 00:18:31.567 { 00:18:31.567 "admin_qpairs": 0, 00:18:31.567 "completed_nvme_io": 0, 00:18:31.567 "current_admin_qpairs": 0, 00:18:31.567 "current_io_qpairs": 0, 00:18:31.567 "io_qpairs": 0, 00:18:31.567 "name": "nvmf_tgt_poll_group_2", 00:18:31.567 "pending_bdev_io": 0, 00:18:31.567 "transports": [ 00:18:31.567 { 00:18:31.567 "trtype": "TCP" 00:18:31.567 } 00:18:31.568 ] 00:18:31.568 }, 00:18:31.568 { 00:18:31.568 "admin_qpairs": 0, 00:18:31.568 "completed_nvme_io": 0, 00:18:31.568 "current_admin_qpairs": 0, 00:18:31.568 "current_io_qpairs": 0, 00:18:31.568 "io_qpairs": 0, 00:18:31.568 "name": "nvmf_tgt_poll_group_3", 00:18:31.568 "pending_bdev_io": 0, 00:18:31.568 "transports": [ 00:18:31.568 { 00:18:31.568 "trtype": "TCP" 00:18:31.568 } 00:18:31.568 ] 00:18:31.568 } 00:18:31.568 ], 00:18:31.568 "tick_rate": 2200000000 00:18:31.568 }' 00:18:31.568 13:28:48 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:31.568 13:28:48 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:31.568 13:28:48 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:31.568 13:28:48 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:31.568 13:28:48 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:31.568 13:28:48 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:31.568 13:28:48 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:31.568 13:28:48 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:31.568 13:28:48 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:31.828 13:28:49 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:31.828 13:28:49 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:31.828 13:28:49 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:31.828 13:28:49 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:31.828 13:28:49 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 Malloc1 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 [2024-04-26 13:28:49.087000] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.2 -s 4420 00:18:31.828 13:28:49 -- common/autotest_common.sh@638 -- # local es=0 00:18:31.828 13:28:49 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.2 -s 4420 00:18:31.828 13:28:49 -- common/autotest_common.sh@626 -- # local arg=nvme 00:18:31.828 13:28:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.828 13:28:49 -- common/autotest_common.sh@630 -- # type -t nvme 00:18:31.828 13:28:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:18:31.828 13:28:49 -- common/autotest_common.sh@632 -- # type -P nvme 00:18:31.828 13:28:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:31.828 13:28:49 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:18:31.828 13:28:49 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:18:31.828 13:28:49 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.2 -s 4420 00:18:31.828 [2024-04-26 13:28:49.115276] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7' 00:18:31.828 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:31.828 could not add new controller: failed to write to nvme-fabrics device 00:18:31.828 13:28:49 -- common/autotest_common.sh@641 -- # es=1 00:18:31.828 13:28:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:31.828 13:28:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:31.828 13:28:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:31.828 13:28:49 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:31.828 13:28:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:31.828 13:28:49 -- common/autotest_common.sh@10 -- # set +x 00:18:31.828 13:28:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:31.828 13:28:49 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:32.115 13:28:49 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:32.115 13:28:49 -- common/autotest_common.sh@1184 -- # local i=0 00:18:32.115 13:28:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:32.115 13:28:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:32.115 13:28:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:34.017 13:28:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:34.017 13:28:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:34.017 13:28:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:34.017 13:28:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:34.017 13:28:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.017 13:28:51 -- common/autotest_common.sh@1194 -- # return 0 00:18:34.017 13:28:51 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:34.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:34.017 13:28:51 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:34.017 13:28:51 -- common/autotest_common.sh@1205 -- # local i=0 00:18:34.017 13:28:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.017 13:28:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:34.017 13:28:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:34.017 13:28:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:34.017 13:28:51 -- 
common/autotest_common.sh@1217 -- # return 0 00:18:34.017 13:28:51 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:34.017 13:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.018 13:28:51 -- common/autotest_common.sh@10 -- # set +x 00:18:34.018 13:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.018 13:28:51 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.018 13:28:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:34.018 13:28:51 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.018 13:28:51 -- common/autotest_common.sh@626 -- # local arg=nvme 00:18:34.018 13:28:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:34.018 13:28:51 -- common/autotest_common.sh@630 -- # type -t nvme 00:18:34.018 13:28:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:34.018 13:28:51 -- common/autotest_common.sh@632 -- # type -P nvme 00:18:34.018 13:28:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:34.018 13:28:51 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:18:34.018 13:28:51 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:18:34.018 13:28:51 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.018 [2024-04-26 13:28:51.397018] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7' 00:18:34.018 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:34.018 could not add new controller: failed to write to nvme-fabrics device 00:18:34.018 13:28:51 -- common/autotest_common.sh@641 -- # es=1 00:18:34.018 13:28:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:34.018 13:28:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:34.018 13:28:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:34.018 13:28:51 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:34.018 13:28:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:34.018 13:28:51 -- common/autotest_common.sh@10 -- # set +x 00:18:34.018 13:28:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:34.018 13:28:51 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:34.276 13:28:51 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:34.276 13:28:51 -- common/autotest_common.sh@1184 -- # local i=0 00:18:34.276 13:28:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.276 13:28:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:34.276 13:28:51 -- common/autotest_common.sh@1191 -- # sleep 
2 00:18:36.181 13:28:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:36.181 13:28:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:36.181 13:28:53 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:36.181 13:28:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:36.181 13:28:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.181 13:28:53 -- common/autotest_common.sh@1194 -- # return 0 00:18:36.181 13:28:53 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:36.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:36.439 13:28:53 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:36.439 13:28:53 -- common/autotest_common.sh@1205 -- # local i=0 00:18:36.439 13:28:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:36.439 13:28:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.439 13:28:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:36.439 13:28:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:36.439 13:28:53 -- common/autotest_common.sh@1217 -- # return 0 00:18:36.439 13:28:53 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.439 13:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.439 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.439 13:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.439 13:28:53 -- target/rpc.sh@81 -- # seq 1 5 00:18:36.439 13:28:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:36.439 13:28:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:36.439 13:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.440 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.440 13:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.440 13:28:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.440 13:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.440 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.440 [2024-04-26 13:28:53.694251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.440 13:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.440 13:28:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:36.440 13:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.440 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.440 13:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.440 13:28:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:36.440 13:28:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:36.440 13:28:53 -- common/autotest_common.sh@10 -- # set +x 00:18:36.440 13:28:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:36.440 13:28:53 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:36.440 13:28:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:36.440 13:28:53 -- 
common/autotest_common.sh@1184 -- # local i=0 00:18:36.440 13:28:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.440 13:28:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:36.440 13:28:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:38.976 13:28:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:38.976 13:28:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:38.976 13:28:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:38.976 13:28:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:38.976 13:28:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:38.976 13:28:55 -- common/autotest_common.sh@1194 -- # return 0 00:18:38.976 13:28:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:38.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:38.976 13:28:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:38.976 13:28:55 -- common/autotest_common.sh@1205 -- # local i=0 00:18:38.976 13:28:55 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:38.976 13:28:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:38.976 13:28:55 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:38.976 13:28:55 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:38.976 13:28:55 -- common/autotest_common.sh@1217 -- # return 0 00:18:38.976 13:28:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:38.976 13:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.976 13:28:55 -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 13:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.976 13:28:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.976 13:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.976 13:28:55 -- common/autotest_common.sh@10 -- # set +x 00:18:38.976 13:28:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.976 13:28:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:38.976 13:28:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:38.976 13:28:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.977 13:28:55 -- common/autotest_common.sh@10 -- # set +x 00:18:38.977 13:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.977 13:28:56 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.977 13:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.977 13:28:56 -- common/autotest_common.sh@10 -- # set +x 00:18:38.977 [2024-04-26 13:28:56.005373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.977 13:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.977 13:28:56 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:38.977 13:28:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.977 13:28:56 -- common/autotest_common.sh@10 -- # set +x 00:18:38.977 13:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.977 13:28:56 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:38.977 13:28:56 
-- common/autotest_common.sh@549 -- # xtrace_disable 00:18:38.977 13:28:56 -- common/autotest_common.sh@10 -- # set +x 00:18:38.977 13:28:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:38.977 13:28:56 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:38.977 13:28:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:38.977 13:28:56 -- common/autotest_common.sh@1184 -- # local i=0 00:18:38.977 13:28:56 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.977 13:28:56 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:38.977 13:28:56 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:40.879 13:28:58 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:40.879 13:28:58 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:40.879 13:28:58 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:40.879 13:28:58 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:40.879 13:28:58 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.879 13:28:58 -- common/autotest_common.sh@1194 -- # return 0 00:18:40.879 13:28:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:41.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:41.136 13:28:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:41.136 13:28:58 -- common/autotest_common.sh@1205 -- # local i=0 00:18:41.136 13:28:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:41.136 13:28:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.136 13:28:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:41.136 13:28:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:41.136 13:28:58 -- common/autotest_common.sh@1217 -- # return 0 00:18:41.136 13:28:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:41.136 13:28:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 [2024-04-26 13:28:58.420824] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:41.136 13:28:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:41.136 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:18:41.136 13:28:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:41.136 13:28:58 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.405 13:28:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:41.405 13:28:58 -- common/autotest_common.sh@1184 -- # local i=0 00:18:41.405 13:28:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.405 13:28:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:41.405 13:28:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:43.305 13:29:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:43.305 13:29:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:43.305 13:29:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.305 13:29:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:43.305 13:29:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.305 13:29:00 -- common/autotest_common.sh@1194 -- # return 0 00:18:43.305 13:29:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:43.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:43.305 13:29:00 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:43.305 13:29:00 -- common/autotest_common.sh@1205 -- # local i=0 00:18:43.305 13:29:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:43.305 13:29:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.305 13:29:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:43.305 13:29:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:43.305 13:29:00 -- common/autotest_common.sh@1217 -- # return 0 00:18:43.305 13:29:00 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:43.305 13:29:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 
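From here the rpc.sh trace settles into a repeated cycle: create the cnode1 subsystem, expose it on 10.0.0.2:4420, attach the Malloc1 namespace, allow any host, connect with nvme-cli, wait for the SPDKISFASTANDAWESOME serial to show up in lsblk, disconnect, and tear the subsystem back down. The sketch below condenses one such cycle; it assumes a working rpc_cmd wrapper pointed at the target's RPC socket, and the host NQN/ID derivation plus the omitted waitforserial polling are editorial simplifications, not part of the captured run.

#!/usr/bin/env bash
# Sketch of one create/connect/teardown iteration from the rpc.sh loop above.
# Not part of the captured log; rpc_cmd is assumed to be the harness wrapper
# around the target's RPC socket, and HOSTNQN/HOSTID stand in for the
# nvme gen-hostnqn values shown in the trace.
set -euo pipefail

SUBNQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME
HOSTNQN=$(nvme gen-hostnqn)
HOSTID=${HOSTNQN##*:}          # uuid portion, as NVME_HOSTID in the trace

rpc_cmd bdev_malloc_create 64 512 -b Malloc1   # backing bdev, created once

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
    rpc_cmd nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host "$SUBNQN"

    nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
        -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420
    # ... the harness polls lsblk here until the SERIAL appears (waitforserial) ...
    nvme disconnect -n "$SUBNQN"

    rpc_cmd nvmf_subsystem_remove_ns "$SUBNQN" 5
    rpc_cmd nvmf_delete_subsystem "$SUBNQN"
done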
00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 [2024-04-26 13:29:00.728151] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:43.305 13:29:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:43.305 13:29:00 -- common/autotest_common.sh@10 -- # set +x 00:18:43.305 13:29:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:43.305 13:29:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:43.562 13:29:00 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.562 13:29:00 -- common/autotest_common.sh@1184 -- # local i=0 00:18:43.562 13:29:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.562 13:29:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:43.562 13:29:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:45.506 13:29:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:45.506 13:29:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:45.506 13:29:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.506 13:29:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:45.506 13:29:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.506 13:29:02 -- common/autotest_common.sh@1194 -- # return 0 00:18:45.506 13:29:02 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:45.765 13:29:02 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:45.765 13:29:02 -- common/autotest_common.sh@1205 -- # local i=0 00:18:45.765 13:29:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:45.765 13:29:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.765 13:29:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:45.765 13:29:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.765 13:29:03 -- common/autotest_common.sh@1217 -- # return 0 00:18:45.765 13:29:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:45.765 13:29:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 [2024-04-26 13:29:03.031938] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:45.765 13:29:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:45.765 13:29:03 -- common/autotest_common.sh@10 -- # set +x 00:18:45.765 13:29:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:45.765 13:29:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:46.030 13:29:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:46.030 13:29:03 -- common/autotest_common.sh@1184 -- # local i=0 00:18:46.030 13:29:03 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.030 13:29:03 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:18:46.030 13:29:03 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:47.932 13:29:05 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:47.932 13:29:05 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:47.932 13:29:05 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:47.932 13:29:05 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:47.932 13:29:05 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:47.932 13:29:05 -- common/autotest_common.sh@1194 -- # return 0 00:18:47.932 13:29:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:47.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.932 13:29:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:47.932 13:29:05 -- common/autotest_common.sh@1205 -- # local i=0 00:18:47.932 13:29:05 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:47.932 13:29:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.932 13:29:05 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:18:47.932 13:29:05 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:47.932 13:29:05 -- common/autotest_common.sh@1217 -- # return 0 00:18:47.932 13:29:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:47.932 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.932 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.932 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@99 -- # seq 1 5 00:18:47.933 13:29:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:47.933 13:29:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 [2024-04-26 13:29:05.331052] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:47.933 13:29:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.933 13:29:05 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:47.933 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.933 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 [2024-04-26 13:29:05.379025] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:48.192 13:29:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 [2024-04-26 13:29:05.427036] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 
13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:48.192 13:29:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 [2024-04-26 13:29:05.475138] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:48.192 13:29:05 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.192 13:29:05 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.192 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.192 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.192 [2024-04-26 13:29:05.523198] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.193 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.193 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:48.193 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.193 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:48.193 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.193 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.193 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.193 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:48.193 13:29:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:48.193 13:29:05 -- common/autotest_common.sh@10 -- # set +x 00:18:48.193 13:29:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:48.193 13:29:05 -- target/rpc.sh@110 -- # stats='{ 00:18:48.193 "poll_groups": [ 00:18:48.193 { 00:18:48.193 "admin_qpairs": 2, 00:18:48.193 "completed_nvme_io": 66, 00:18:48.193 "current_admin_qpairs": 0, 00:18:48.193 "current_io_qpairs": 0, 00:18:48.193 "io_qpairs": 16, 00:18:48.193 "name": "nvmf_tgt_poll_group_0", 00:18:48.193 "pending_bdev_io": 0, 00:18:48.193 "transports": [ 00:18:48.193 { 00:18:48.193 "trtype": "TCP" 00:18:48.193 } 00:18:48.193 ] 00:18:48.193 }, 00:18:48.193 { 00:18:48.193 "admin_qpairs": 3, 00:18:48.193 "completed_nvme_io": 67, 00:18:48.193 "current_admin_qpairs": 0, 00:18:48.193 "current_io_qpairs": 0, 00:18:48.193 "io_qpairs": 17, 00:18:48.193 "name": "nvmf_tgt_poll_group_1", 00:18:48.193 "pending_bdev_io": 0, 00:18:48.193 "transports": [ 00:18:48.193 { 00:18:48.193 "trtype": "TCP" 00:18:48.193 } 00:18:48.193 ] 00:18:48.193 }, 00:18:48.193 { 00:18:48.193 "admin_qpairs": 1, 00:18:48.193 "completed_nvme_io": 120, 00:18:48.193 "current_admin_qpairs": 0, 00:18:48.193 "current_io_qpairs": 0, 00:18:48.193 "io_qpairs": 19, 00:18:48.193 "name": "nvmf_tgt_poll_group_2", 00:18:48.193 "pending_bdev_io": 0, 00:18:48.193 "transports": [ 00:18:48.193 { 00:18:48.193 "trtype": "TCP" 00:18:48.193 } 00:18:48.193 ] 00:18:48.193 }, 00:18:48.193 { 00:18:48.193 "admin_qpairs": 1, 00:18:48.193 "completed_nvme_io": 167, 00:18:48.193 "current_admin_qpairs": 0, 00:18:48.193 "current_io_qpairs": 0, 00:18:48.193 "io_qpairs": 18, 00:18:48.193 "name": "nvmf_tgt_poll_group_3", 00:18:48.193 "pending_bdev_io": 0, 00:18:48.193 "transports": [ 00:18:48.193 { 00:18:48.193 "trtype": "TCP" 00:18:48.193 } 00:18:48.193 ] 00:18:48.193 } 00:18:48.193 ], 00:18:48.193 "tick_rate": 2200000000 00:18:48.193 }' 00:18:48.193 13:29:05 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:48.193 13:29:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:48.193 13:29:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:48.193 13:29:05 -- target/rpc.sh@20 -- # awk 
'{s+=$1}END{print s}' 00:18:48.193 13:29:05 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:48.452 13:29:05 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:48.452 13:29:05 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:48.452 13:29:05 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:48.452 13:29:05 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:48.452 13:29:05 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:18:48.452 13:29:05 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:48.452 13:29:05 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:48.452 13:29:05 -- target/rpc.sh@123 -- # nvmftestfini 00:18:48.452 13:29:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:48.452 13:29:05 -- nvmf/common.sh@117 -- # sync 00:18:48.452 13:29:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.452 13:29:05 -- nvmf/common.sh@120 -- # set +e 00:18:48.452 13:29:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.452 13:29:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.452 rmmod nvme_tcp 00:18:48.452 rmmod nvme_fabrics 00:18:48.452 rmmod nvme_keyring 00:18:48.452 13:29:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.452 13:29:05 -- nvmf/common.sh@124 -- # set -e 00:18:48.452 13:29:05 -- nvmf/common.sh@125 -- # return 0 00:18:48.452 13:29:05 -- nvmf/common.sh@478 -- # '[' -n 67340 ']' 00:18:48.452 13:29:05 -- nvmf/common.sh@479 -- # killprocess 67340 00:18:48.452 13:29:05 -- common/autotest_common.sh@936 -- # '[' -z 67340 ']' 00:18:48.452 13:29:05 -- common/autotest_common.sh@940 -- # kill -0 67340 00:18:48.452 13:29:05 -- common/autotest_common.sh@941 -- # uname 00:18:48.452 13:29:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:48.452 13:29:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67340 00:18:48.452 killing process with pid 67340 00:18:48.452 13:29:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:48.452 13:29:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:48.452 13:29:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67340' 00:18:48.452 13:29:05 -- common/autotest_common.sh@955 -- # kill 67340 00:18:48.452 13:29:05 -- common/autotest_common.sh@960 -- # wait 67340 00:18:48.710 13:29:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:48.710 13:29:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:48.710 13:29:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:48.710 13:29:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.710 13:29:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.710 13:29:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.710 13:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.710 13:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.710 13:29:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:48.710 ************************************ 00:18:48.710 END TEST nvmf_rpc 00:18:48.710 ************************************ 00:18:48.710 00:18:48.710 real 0m18.945s 00:18:48.710 user 1m10.911s 00:18:48.710 sys 0m2.746s 00:18:48.710 13:29:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:48.710 13:29:06 -- common/autotest_common.sh@10 -- # set +x 00:18:48.710 13:29:06 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:48.710 13:29:06 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:48.710 13:29:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.710 13:29:06 -- common/autotest_common.sh@10 -- # set +x 00:18:48.969 ************************************ 00:18:48.969 START TEST nvmf_invalid 00:18:48.969 ************************************ 00:18:48.969 13:29:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:48.969 * Looking for test storage... 00:18:48.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:48.969 13:29:06 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.969 13:29:06 -- nvmf/common.sh@7 -- # uname -s 00:18:48.969 13:29:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.969 13:29:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.969 13:29:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.969 13:29:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.969 13:29:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.969 13:29:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.969 13:29:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.969 13:29:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.969 13:29:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.969 13:29:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.969 13:29:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:48.969 13:29:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:48.969 13:29:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.970 13:29:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.970 13:29:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.970 13:29:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.970 13:29:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.970 13:29:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.970 13:29:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.970 13:29:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.970 13:29:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.970 13:29:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.970 13:29:06 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.970 13:29:06 -- paths/export.sh@5 -- # export PATH 00:18:48.970 13:29:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.970 13:29:06 -- nvmf/common.sh@47 -- # : 0 00:18:48.970 13:29:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.970 13:29:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.970 13:29:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.970 13:29:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.970 13:29:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.970 13:29:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.970 13:29:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.970 13:29:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.970 13:29:06 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:48.970 13:29:06 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.970 13:29:06 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:48.970 13:29:06 -- target/invalid.sh@14 -- # target=foobar 00:18:48.970 13:29:06 -- target/invalid.sh@16 -- # RANDOM=0 00:18:48.970 13:29:06 -- target/invalid.sh@34 -- # nvmftestinit 00:18:48.970 13:29:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:48.970 13:29:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.970 13:29:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:48.970 13:29:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:48.970 13:29:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:48.970 13:29:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.970 13:29:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.970 13:29:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.970 13:29:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:48.970 13:29:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:48.970 13:29:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:48.970 13:29:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:48.970 13:29:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:48.970 13:29:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:48.970 13:29:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.970 13:29:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.970 13:29:06 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:48.970 13:29:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:48.970 13:29:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.970 13:29:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.970 13:29:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.970 13:29:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.970 13:29:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.970 13:29:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.970 13:29:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.970 13:29:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.970 13:29:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:48.970 13:29:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:48.970 Cannot find device "nvmf_tgt_br" 00:18:48.970 13:29:06 -- nvmf/common.sh@155 -- # true 00:18:48.970 13:29:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.970 Cannot find device "nvmf_tgt_br2" 00:18:48.970 13:29:06 -- nvmf/common.sh@156 -- # true 00:18:48.970 13:29:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:48.970 13:29:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:48.970 Cannot find device "nvmf_tgt_br" 00:18:48.970 13:29:06 -- nvmf/common.sh@158 -- # true 00:18:48.970 13:29:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:48.970 Cannot find device "nvmf_tgt_br2" 00:18:48.970 13:29:06 -- nvmf/common.sh@159 -- # true 00:18:48.970 13:29:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:49.228 13:29:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:49.228 13:29:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.228 13:29:06 -- nvmf/common.sh@162 -- # true 00:18:49.228 13:29:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.228 13:29:06 -- nvmf/common.sh@163 -- # true 00:18:49.228 13:29:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.228 13:29:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.228 13:29:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.228 13:29:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.228 13:29:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.228 13:29:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.228 13:29:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.228 13:29:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.228 13:29:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.228 13:29:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:49.228 13:29:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:49.228 13:29:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:49.228 13:29:06 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:18:49.229 13:29:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.229 13:29:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.229 13:29:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.229 13:29:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:49.229 13:29:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:49.229 13:29:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.229 13:29:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.229 13:29:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.229 13:29:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.229 13:29:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.229 13:29:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:49.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:49.229 00:18:49.229 --- 10.0.0.2 ping statistics --- 00:18:49.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.229 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:49.229 13:29:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:49.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:49.229 00:18:49.229 --- 10.0.0.3 ping statistics --- 00:18:49.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.229 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:49.229 13:29:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:49.229 00:18:49.229 --- 10.0.0.1 ping statistics --- 00:18:49.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.229 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:49.229 13:29:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.229 13:29:06 -- nvmf/common.sh@422 -- # return 0 00:18:49.229 13:29:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:49.229 13:29:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.229 13:29:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:49.229 13:29:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:49.229 13:29:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.229 13:29:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:49.229 13:29:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:49.488 13:29:06 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:49.488 13:29:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:49.488 13:29:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:49.488 13:29:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.488 13:29:06 -- nvmf/common.sh@470 -- # nvmfpid=67863 00:18:49.488 13:29:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.488 13:29:06 -- nvmf/common.sh@471 -- # waitforlisten 67863 00:18:49.488 13:29:06 -- common/autotest_common.sh@817 -- # '[' -z 67863 ']' 00:18:49.488 13:29:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.488 13:29:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:49.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.488 13:29:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.488 13:29:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:49.488 13:29:06 -- common/autotest_common.sh@10 -- # set +x 00:18:49.488 [2024-04-26 13:29:06.743855] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:49.488 [2024-04-26 13:29:06.743959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.488 [2024-04-26 13:29:06.886877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.746 [2024-04-26 13:29:07.008914] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.746 [2024-04-26 13:29:07.009024] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.746 [2024-04-26 13:29:07.009043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.746 [2024-04-26 13:29:07.009057] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.746 [2024-04-26 13:29:07.009084] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
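The startup notices above come from nvmf_tgt being launched inside the nvmf_tgt_ns_spdk namespace, after which the harness waits for the target's RPC socket before issuing any configuration calls. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the liveness probe (simplified from the nvmfappstart/waitforlisten helpers traced here):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the target's JSON-RPC socket answers; only then is it safe to configure it
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done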
00:18:49.746 [2024-04-26 13:29:07.009237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.746 [2024-04-26 13:29:07.009739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.746 [2024-04-26 13:29:07.010389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.746 [2024-04-26 13:29:07.010399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.681 13:29:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:50.681 13:29:07 -- common/autotest_common.sh@850 -- # return 0 00:18:50.681 13:29:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:50.681 13:29:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:50.681 13:29:07 -- common/autotest_common.sh@10 -- # set +x 00:18:50.681 13:29:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.681 13:29:07 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:50.681 13:29:07 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7668 00:18:50.681 [2024-04-26 13:29:08.066038] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:50.681 13:29:08 -- target/invalid.sh@40 -- # out='2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7668 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:50.681 request: 00:18:50.681 { 00:18:50.681 "method": "nvmf_create_subsystem", 00:18:50.681 "params": { 00:18:50.681 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:18:50.681 "tgt_name": "foobar" 00:18:50.681 } 00:18:50.681 } 00:18:50.681 Got JSON-RPC error response 00:18:50.681 GoRPCClient: error on JSON-RPC call' 00:18:50.681 13:29:08 -- target/invalid.sh@41 -- # [[ 2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7668 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:50.681 request: 00:18:50.681 { 00:18:50.681 "method": "nvmf_create_subsystem", 00:18:50.681 "params": { 00:18:50.681 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:18:50.681 "tgt_name": "foobar" 00:18:50.681 } 00:18:50.681 } 00:18:50.681 Got JSON-RPC error response 00:18:50.681 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:50.681 13:29:08 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:50.681 13:29:08 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2099 00:18:50.939 [2024-04-26 13:29:08.374366] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2099: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:51.198 13:29:08 -- target/invalid.sh@45 -- # out='2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2099 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:51.198 request: 00:18:51.198 { 00:18:51.198 "method": "nvmf_create_subsystem", 00:18:51.198 "params": { 00:18:51.198 "nqn": "nqn.2016-06.io.spdk:cnode2099", 00:18:51.198 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:18:51.198 } 00:18:51.198 } 00:18:51.198 Got JSON-RPC error response 00:18:51.198 GoRPCClient: error on JSON-RPC call' 00:18:51.198 13:29:08 -- target/invalid.sh@46 -- # [[ 2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode2099 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:51.198 request: 00:18:51.198 { 00:18:51.198 "method": "nvmf_create_subsystem", 00:18:51.198 "params": { 00:18:51.198 "nqn": "nqn.2016-06.io.spdk:cnode2099", 00:18:51.198 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:51.198 } 00:18:51.198 } 00:18:51.198 Got JSON-RPC error response 00:18:51.198 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:51.198 13:29:08 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:51.198 13:29:08 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode30241 00:18:51.457 [2024-04-26 13:29:08.662642] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30241: invalid model number 'SPDK_Controller' 00:18:51.457 13:29:08 -- target/invalid.sh@50 -- # out='2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30241], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:51.457 request: 00:18:51.457 { 00:18:51.457 "method": "nvmf_create_subsystem", 00:18:51.457 "params": { 00:18:51.457 "nqn": "nqn.2016-06.io.spdk:cnode30241", 00:18:51.457 "model_number": "SPDK_Controller\u001f" 00:18:51.457 } 00:18:51.457 } 00:18:51.457 Got JSON-RPC error response 00:18:51.457 GoRPCClient: error on JSON-RPC call' 00:18:51.457 13:29:08 -- target/invalid.sh@51 -- # [[ 2024/04/26 13:29:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode30241], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:51.457 request: 00:18:51.457 { 00:18:51.457 "method": "nvmf_create_subsystem", 00:18:51.457 "params": { 00:18:51.457 "nqn": "nqn.2016-06.io.spdk:cnode30241", 00:18:51.457 "model_number": "SPDK_Controller\u001f" 00:18:51.457 } 00:18:51.457 } 00:18:51.457 Got JSON-RPC error response 00:18:51.457 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:51.457 13:29:08 -- target/invalid.sh@54 -- # gen_random_s 21 00:18:51.457 13:29:08 -- target/invalid.sh@19 -- # local length=21 ll 00:18:51.457 13:29:08 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:51.457 13:29:08 -- target/invalid.sh@21 -- # local chars 00:18:51.457 13:29:08 -- target/invalid.sh@22 -- # local string 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 
00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 96 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+='`' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 65 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=A 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 46 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=. 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 75 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=K 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 47 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=/ 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 39 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=\' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 80 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=P 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 116 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=t 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 70 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=F 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 124 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+='|' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 94 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+='^' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 
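The three rejections traced above (the foobar target name, the serial number with a trailing 0x1f byte, and the SPDK_Controller model number with the same control byte) all follow one negative-test pattern: call rpc.py with a deliberately malformed value, capture the JSON-RPC error, and assert on the message text. A condensed sketch of the model-number case, assuming the same script path as the trace (not the literal target/invalid.sh source):

    # embed a 0x1f control byte in the model number; the target must reject it
    out=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
            nqn.2016-06.io.spdk:cnode30241 -d "$(echo -e 'SPDK_Controller\x1f')" 2>&1) || true
    [[ $out == *"Invalid MN"* ]]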
00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 73 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=I 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 119 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=w 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 44 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=, 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 115 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=s 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 38 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+='&' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 59 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=';' 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 86 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=V 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 72 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=H 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 77 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=M 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # printf %x 70 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # echo -e '\x46' 00:18:51.457 13:29:08 -- target/invalid.sh@25 -- # string+=F 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.457 13:29:08 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.457 13:29:08 -- target/invalid.sh@28 -- # [[ ` == \- ]] 00:18:51.457 13:29:08 -- target/invalid.sh@31 -- # echo '`A.K/'\''PtF|^Iw,s&;VHMF' 00:18:51.457 13:29:08 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '`A.K/'\''PtF|^Iw,s&;VHMF' nqn.2016-06.io.spdk:cnode20127 
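The long printf %x / echo -e run above is the gen_random_s helper assembling a random string one character at a time from its table of code points 32 through 127, driven by bash's RANDOM (seeded with RANDOM=0 earlier in the trace). A reduced sketch of that helper, under the assumption that picking codes with RANDOM is equivalent for illustration to the table-walk the trace shows:

    gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))    # same code-point range as the chars table traced above
      for (( ll = 0; ll < length; ll++ )); do
        # convert a randomly chosen decimal code to hex, then append the character it names
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
    }

The generated string is then fed back through nvmf_create_subsystem, as in the cnode20127 call above, to confirm the target rejects it as an invalid serial number.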
00:18:51.717 [2024-04-26 13:29:09.007013] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20127: invalid serial number '`A.K/'PtF|^Iw,s&;VHMF' 00:18:51.717 13:29:09 -- target/invalid.sh@54 -- # out='2024/04/26 13:29:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20127 serial_number:`A.K/'\''PtF|^Iw,s&;VHMF], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN `A.K/'\''PtF|^Iw,s&;VHMF 00:18:51.717 request: 00:18:51.717 { 00:18:51.717 "method": "nvmf_create_subsystem", 00:18:51.717 "params": { 00:18:51.717 "nqn": "nqn.2016-06.io.spdk:cnode20127", 00:18:51.717 "serial_number": "`A.K/'\''PtF|^Iw,s&;VHMF" 00:18:51.717 } 00:18:51.717 } 00:18:51.717 Got JSON-RPC error response 00:18:51.717 GoRPCClient: error on JSON-RPC call' 00:18:51.717 13:29:09 -- target/invalid.sh@55 -- # [[ 2024/04/26 13:29:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20127 serial_number:`A.K/'PtF|^Iw,s&;VHMF], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN `A.K/'PtF|^Iw,s&;VHMF 00:18:51.717 request: 00:18:51.717 { 00:18:51.717 "method": "nvmf_create_subsystem", 00:18:51.717 "params": { 00:18:51.717 "nqn": "nqn.2016-06.io.spdk:cnode20127", 00:18:51.717 "serial_number": "`A.K/'PtF|^Iw,s&;VHMF" 00:18:51.717 } 00:18:51.717 } 00:18:51.717 Got JSON-RPC error response 00:18:51.717 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:51.717 13:29:09 -- target/invalid.sh@58 -- # gen_random_s 41 00:18:51.717 13:29:09 -- target/invalid.sh@19 -- # local length=41 ll 00:18:51.717 13:29:09 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:51.717 13:29:09 -- target/invalid.sh@21 -- # local chars 00:18:51.717 13:29:09 -- target/invalid.sh@22 -- # local string 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 110 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=n 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 111 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=o 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 55 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=7 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 100 00:18:51.717 
13:29:09 -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=d 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 67 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x43' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=C 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 47 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=/ 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 46 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=. 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 35 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+='#' 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 66 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x42' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=B 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 37 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=% 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 118 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=v 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 113 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=q 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 104 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=h 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.717 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # printf %x 100 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:51.717 13:29:09 -- target/invalid.sh@25 -- # string+=d 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 53 00:18:51.718 
13:29:09 -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=5 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 104 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x68' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=h 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 110 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=n 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 114 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x72' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=r 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 47 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=/ 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 86 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=V 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 90 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=Z 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 111 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=o 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 123 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+='{' 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 51 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=3 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 78 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=N 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 37 00:18:51.718 
13:29:09 -- target/invalid.sh@25 -- # echo -e '\x25' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=% 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 73 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=I 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 61 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+== 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 96 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+='`' 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 89 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=Y 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 39 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=\' 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 119 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=w 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 51 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=3 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 121 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # string+=y 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:51.718 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:51.718 13:29:09 -- target/invalid.sh@25 -- # printf %x 79 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+=O 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 88 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+=X 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 57 00:18:52.019 
13:29:09 -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+=9 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 88 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+=X 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 54 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+=6 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 96 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+='`' 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # printf %x 124 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:52.019 13:29:09 -- target/invalid.sh@25 -- # string+='|' 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll++ )) 00:18:52.019 13:29:09 -- target/invalid.sh@24 -- # (( ll < length )) 00:18:52.019 13:29:09 -- target/invalid.sh@28 -- # [[ n == \- ]] 00:18:52.019 13:29:09 -- target/invalid.sh@31 -- # echo 'no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'\''w3yOX9X6`|' 00:18:52.019 13:29:09 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'\''w3yOX9X6`|' nqn.2016-06.io.spdk:cnode18283 00:18:52.019 [2024-04-26 13:29:09.459365] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18283: invalid model number 'no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'w3yOX9X6`|' 00:18:52.278 13:29:09 -- target/invalid.sh@58 -- # out='2024/04/26 13:29:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'\''w3yOX9X6`| nqn:nqn.2016-06.io.spdk:cnode18283], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'\''w3yOX9X6`| 00:18:52.278 request: 00:18:52.278 { 00:18:52.278 "method": "nvmf_create_subsystem", 00:18:52.278 "params": { 00:18:52.278 "nqn": "nqn.2016-06.io.spdk:cnode18283", 00:18:52.278 "model_number": "no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'\''w3yOX9X6`|" 00:18:52.278 } 00:18:52.278 } 00:18:52.278 Got JSON-RPC error response 00:18:52.278 GoRPCClient: error on JSON-RPC call' 00:18:52.278 13:29:09 -- target/invalid.sh@59 -- # [[ 2024/04/26 13:29:09 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'w3yOX9X6`| nqn:nqn.2016-06.io.spdk:cnode18283], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'w3yOX9X6`| 00:18:52.278 request: 00:18:52.278 { 00:18:52.278 "method": "nvmf_create_subsystem", 00:18:52.278 "params": { 00:18:52.278 "nqn": "nqn.2016-06.io.spdk:cnode18283", 00:18:52.278 "model_number": "no7dC/.#B%vqhd5hnr/VZo{3N%I=`Y'w3yOX9X6`|" 00:18:52.278 } 00:18:52.278 } 00:18:52.278 Got JSON-RPC error response 00:18:52.278 GoRPCClient: error on JSON-RPC call 
== *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:52.278 13:29:09 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:52.278 [2024-04-26 13:29:09.723651] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:52.536 13:29:09 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:52.793 13:29:10 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:52.793 13:29:10 -- target/invalid.sh@67 -- # head -n 1 00:18:52.794 13:29:10 -- target/invalid.sh@67 -- # echo '' 00:18:52.794 13:29:10 -- target/invalid.sh@67 -- # IP= 00:18:52.794 13:29:10 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:52.794 [2024-04-26 13:29:10.239547] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:53.052 13:29:10 -- target/invalid.sh@69 -- # out='2024/04/26 13:29:10 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:53.052 request: 00:18:53.052 { 00:18:53.052 "method": "nvmf_subsystem_remove_listener", 00:18:53.052 "params": { 00:18:53.052 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:53.052 "listen_address": { 00:18:53.052 "trtype": "tcp", 00:18:53.052 "traddr": "", 00:18:53.052 "trsvcid": "4421" 00:18:53.052 } 00:18:53.052 } 00:18:53.052 } 00:18:53.052 Got JSON-RPC error response 00:18:53.052 GoRPCClient: error on JSON-RPC call' 00:18:53.052 13:29:10 -- target/invalid.sh@70 -- # [[ 2024/04/26 13:29:10 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:18:53.052 request: 00:18:53.052 { 00:18:53.052 "method": "nvmf_subsystem_remove_listener", 00:18:53.052 "params": { 00:18:53.052 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:53.052 "listen_address": { 00:18:53.052 "trtype": "tcp", 00:18:53.052 "traddr": "", 00:18:53.052 "trsvcid": "4421" 00:18:53.052 } 00:18:53.052 } 00:18:53.052 } 00:18:53.052 Got JSON-RPC error response 00:18:53.052 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:53.052 13:29:10 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27001 -i 0 00:18:53.310 [2024-04-26 13:29:10.539753] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27001: invalid cntlid range [0-65519] 00:18:53.310 13:29:10 -- target/invalid.sh@73 -- # out='2024/04/26 13:29:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode27001], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:53.310 request: 00:18:53.310 { 00:18:53.310 "method": "nvmf_create_subsystem", 00:18:53.310 "params": { 00:18:53.310 "nqn": "nqn.2016-06.io.spdk:cnode27001", 00:18:53.310 "min_cntlid": 0 00:18:53.310 } 00:18:53.310 } 00:18:53.310 Got JSON-RPC error response 00:18:53.310 GoRPCClient: error on JSON-RPC call' 00:18:53.310 13:29:10 -- target/invalid.sh@74 -- # [[ 2024/04/26 13:29:10 
error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode27001], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:18:53.310 request: 00:18:53.310 { 00:18:53.310 "method": "nvmf_create_subsystem", 00:18:53.310 "params": { 00:18:53.310 "nqn": "nqn.2016-06.io.spdk:cnode27001", 00:18:53.310 "min_cntlid": 0 00:18:53.310 } 00:18:53.310 } 00:18:53.310 Got JSON-RPC error response 00:18:53.310 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:53.310 13:29:10 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14891 -i 65520 00:18:53.567 [2024-04-26 13:29:10.816018] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14891: invalid cntlid range [65520-65519] 00:18:53.567 13:29:10 -- target/invalid.sh@75 -- # out='2024/04/26 13:29:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14891], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:53.567 request: 00:18:53.567 { 00:18:53.567 "method": "nvmf_create_subsystem", 00:18:53.567 "params": { 00:18:53.567 "nqn": "nqn.2016-06.io.spdk:cnode14891", 00:18:53.567 "min_cntlid": 65520 00:18:53.567 } 00:18:53.567 } 00:18:53.567 Got JSON-RPC error response 00:18:53.567 GoRPCClient: error on JSON-RPC call' 00:18:53.567 13:29:10 -- target/invalid.sh@76 -- # [[ 2024/04/26 13:29:10 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode14891], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:18:53.567 request: 00:18:53.567 { 00:18:53.568 "method": "nvmf_create_subsystem", 00:18:53.568 "params": { 00:18:53.568 "nqn": "nqn.2016-06.io.spdk:cnode14891", 00:18:53.568 "min_cntlid": 65520 00:18:53.568 } 00:18:53.568 } 00:18:53.568 Got JSON-RPC error response 00:18:53.568 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:53.568 13:29:10 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12573 -I 0 00:18:53.825 [2024-04-26 13:29:11.112292] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12573: invalid cntlid range [1-0] 00:18:53.825 13:29:11 -- target/invalid.sh@77 -- # out='2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:53.825 request: 00:18:53.825 { 00:18:53.825 "method": "nvmf_create_subsystem", 00:18:53.825 "params": { 00:18:53.825 "nqn": "nqn.2016-06.io.spdk:cnode12573", 00:18:53.825 "max_cntlid": 0 00:18:53.825 } 00:18:53.825 } 00:18:53.825 Got JSON-RPC error response 00:18:53.825 GoRPCClient: error on JSON-RPC call' 00:18:53.825 13:29:11 -- target/invalid.sh@78 -- # [[ 2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:18:53.825 request: 00:18:53.825 { 00:18:53.825 "method": "nvmf_create_subsystem", 00:18:53.825 "params": { 00:18:53.825 
"nqn": "nqn.2016-06.io.spdk:cnode12573", 00:18:53.825 "max_cntlid": 0 00:18:53.825 } 00:18:53.825 } 00:18:53.825 Got JSON-RPC error response 00:18:53.825 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:53.825 13:29:11 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29860 -I 65520 00:18:54.084 [2024-04-26 13:29:11.404555] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29860: invalid cntlid range [1-65520] 00:18:54.084 13:29:11 -- target/invalid.sh@79 -- # out='2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29860], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:54.084 request: 00:18:54.084 { 00:18:54.084 "method": "nvmf_create_subsystem", 00:18:54.084 "params": { 00:18:54.084 "nqn": "nqn.2016-06.io.spdk:cnode29860", 00:18:54.084 "max_cntlid": 65520 00:18:54.084 } 00:18:54.084 } 00:18:54.084 Got JSON-RPC error response 00:18:54.084 GoRPCClient: error on JSON-RPC call' 00:18:54.084 13:29:11 -- target/invalid.sh@80 -- # [[ 2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode29860], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:18:54.084 request: 00:18:54.084 { 00:18:54.084 "method": "nvmf_create_subsystem", 00:18:54.084 "params": { 00:18:54.084 "nqn": "nqn.2016-06.io.spdk:cnode29860", 00:18:54.084 "max_cntlid": 65520 00:18:54.084 } 00:18:54.084 } 00:18:54.084 Got JSON-RPC error response 00:18:54.084 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:54.084 13:29:11 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7628 -i 6 -I 5 00:18:54.342 [2024-04-26 13:29:11.708161] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7628: invalid cntlid range [6-5] 00:18:54.342 13:29:11 -- target/invalid.sh@83 -- # out='2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7628], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:54.342 request: 00:18:54.342 { 00:18:54.342 "method": "nvmf_create_subsystem", 00:18:54.342 "params": { 00:18:54.342 "nqn": "nqn.2016-06.io.spdk:cnode7628", 00:18:54.342 "min_cntlid": 6, 00:18:54.342 "max_cntlid": 5 00:18:54.342 } 00:18:54.342 } 00:18:54.342 Got JSON-RPC error response 00:18:54.342 GoRPCClient: error on JSON-RPC call' 00:18:54.342 13:29:11 -- target/invalid.sh@84 -- # [[ 2024/04/26 13:29:11 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7628], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:18:54.342 request: 00:18:54.342 { 00:18:54.342 "method": "nvmf_create_subsystem", 00:18:54.342 "params": { 00:18:54.342 "nqn": "nqn.2016-06.io.spdk:cnode7628", 00:18:54.342 "min_cntlid": 6, 00:18:54.342 "max_cntlid": 5 00:18:54.343 } 00:18:54.343 } 00:18:54.343 Got JSON-RPC error response 00:18:54.343 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:54.343 13:29:11 -- 
target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:54.602 13:29:11 -- target/invalid.sh@87 -- # out='request: 00:18:54.602 { 00:18:54.602 "name": "foobar", 00:18:54.602 "method": "nvmf_delete_target", 00:18:54.602 "req_id": 1 00:18:54.602 } 00:18:54.602 Got JSON-RPC error response 00:18:54.602 response: 00:18:54.602 { 00:18:54.602 "code": -32602, 00:18:54.602 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:54.602 }' 00:18:54.602 13:29:11 -- target/invalid.sh@88 -- # [[ request: 00:18:54.602 { 00:18:54.602 "name": "foobar", 00:18:54.602 "method": "nvmf_delete_target", 00:18:54.602 "req_id": 1 00:18:54.602 } 00:18:54.602 Got JSON-RPC error response 00:18:54.602 response: 00:18:54.602 { 00:18:54.602 "code": -32602, 00:18:54.602 "message": "The specified target doesn't exist, cannot delete it." 00:18:54.602 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:54.602 13:29:11 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:54.602 13:29:11 -- target/invalid.sh@91 -- # nvmftestfini 00:18:54.602 13:29:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:54.602 13:29:11 -- nvmf/common.sh@117 -- # sync 00:18:54.602 13:29:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:54.602 13:29:11 -- nvmf/common.sh@120 -- # set +e 00:18:54.602 13:29:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:54.602 13:29:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:54.602 rmmod nvme_tcp 00:18:54.602 rmmod nvme_fabrics 00:18:54.602 rmmod nvme_keyring 00:18:54.602 13:29:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:54.602 13:29:11 -- nvmf/common.sh@124 -- # set -e 00:18:54.602 13:29:11 -- nvmf/common.sh@125 -- # return 0 00:18:54.602 13:29:11 -- nvmf/common.sh@478 -- # '[' -n 67863 ']' 00:18:54.602 13:29:11 -- nvmf/common.sh@479 -- # killprocess 67863 00:18:54.602 13:29:11 -- common/autotest_common.sh@936 -- # '[' -z 67863 ']' 00:18:54.602 13:29:11 -- common/autotest_common.sh@940 -- # kill -0 67863 00:18:54.602 13:29:11 -- common/autotest_common.sh@941 -- # uname 00:18:54.602 13:29:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:54.602 13:29:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67863 00:18:54.602 killing process with pid 67863 00:18:54.602 13:29:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:54.602 13:29:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:54.602 13:29:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67863' 00:18:54.602 13:29:11 -- common/autotest_common.sh@955 -- # kill 67863 00:18:54.602 13:29:11 -- common/autotest_common.sh@960 -- # wait 67863 00:18:54.861 13:29:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:54.861 13:29:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:54.861 13:29:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:54.861 13:29:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.861 13:29:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.861 13:29:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.861 13:29:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.861 13:29:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:54.861 13:29:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:54.861 00:18:54.861 real 
0m6.057s 00:18:54.861 user 0m24.216s 00:18:54.861 sys 0m1.310s 00:18:54.861 13:29:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.861 13:29:12 -- common/autotest_common.sh@10 -- # set +x 00:18:54.861 ************************************ 00:18:54.861 END TEST nvmf_invalid 00:18:54.861 ************************************ 00:18:55.119 13:29:12 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:55.119 13:29:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:55.119 13:29:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:55.119 13:29:12 -- common/autotest_common.sh@10 -- # set +x 00:18:55.119 ************************************ 00:18:55.119 START TEST nvmf_abort 00:18:55.119 ************************************ 00:18:55.119 13:29:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:55.119 * Looking for test storage... 00:18:55.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:55.119 13:29:12 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:55.119 13:29:12 -- nvmf/common.sh@7 -- # uname -s 00:18:55.119 13:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.119 13:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.119 13:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.119 13:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.119 13:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.119 13:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.119 13:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.119 13:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.119 13:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.119 13:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.119 13:29:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:55.119 13:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:55.119 13:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.119 13:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.119 13:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:55.119 13:29:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.119 13:29:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.119 13:29:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.119 13:29:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.120 13:29:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.120 13:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.120 13:29:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.120 13:29:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.120 13:29:12 -- paths/export.sh@5 -- # export PATH 00:18:55.120 13:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.120 13:29:12 -- nvmf/common.sh@47 -- # : 0 00:18:55.120 13:29:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.120 13:29:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.120 13:29:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.120 13:29:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.120 13:29:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.120 13:29:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.120 13:29:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.120 13:29:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.120 13:29:12 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.120 13:29:12 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:55.120 13:29:12 -- target/abort.sh@14 -- # nvmftestinit 00:18:55.120 13:29:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:55.120 13:29:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.120 13:29:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:55.120 13:29:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:55.120 13:29:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:55.120 13:29:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.120 13:29:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.120 13:29:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.120 13:29:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:55.120 13:29:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:55.120 13:29:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:55.120 13:29:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:55.120 13:29:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:55.120 13:29:12 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:18:55.120 13:29:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.120 13:29:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.120 13:29:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:55.120 13:29:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:55.120 13:29:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:55.120 13:29:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:55.120 13:29:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:55.120 13:29:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.120 13:29:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:55.120 13:29:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:55.120 13:29:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:55.120 13:29:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:55.120 13:29:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:55.120 13:29:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:55.120 Cannot find device "nvmf_tgt_br" 00:18:55.120 13:29:12 -- nvmf/common.sh@155 -- # true 00:18:55.120 13:29:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:55.120 Cannot find device "nvmf_tgt_br2" 00:18:55.120 13:29:12 -- nvmf/common.sh@156 -- # true 00:18:55.120 13:29:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:55.120 13:29:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:55.120 Cannot find device "nvmf_tgt_br" 00:18:55.120 13:29:12 -- nvmf/common.sh@158 -- # true 00:18:55.120 13:29:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:55.120 Cannot find device "nvmf_tgt_br2" 00:18:55.120 13:29:12 -- nvmf/common.sh@159 -- # true 00:18:55.120 13:29:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:55.378 13:29:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:55.378 13:29:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:55.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.378 13:29:12 -- nvmf/common.sh@162 -- # true 00:18:55.378 13:29:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:55.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:55.378 13:29:12 -- nvmf/common.sh@163 -- # true 00:18:55.378 13:29:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:55.378 13:29:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:55.378 13:29:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:55.378 13:29:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:55.379 13:29:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:55.379 13:29:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:55.379 13:29:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:55.379 13:29:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:55.379 13:29:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:55.379 13:29:12 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:18:55.379 13:29:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:55.379 13:29:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:55.379 13:29:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:55.379 13:29:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:55.379 13:29:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:55.379 13:29:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:55.379 13:29:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:55.379 13:29:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:55.379 13:29:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:55.379 13:29:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:55.379 13:29:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:55.379 13:29:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:55.379 13:29:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:55.379 13:29:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:55.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:18:55.379 00:18:55.379 --- 10.0.0.2 ping statistics --- 00:18:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.379 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:55.379 13:29:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:55.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:55.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:18:55.379 00:18:55.379 --- 10.0.0.3 ping statistics --- 00:18:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.379 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:18:55.379 13:29:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:55.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:55.379 00:18:55.379 --- 10.0.0.1 ping statistics --- 00:18:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.379 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:55.379 13:29:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.379 13:29:12 -- nvmf/common.sh@422 -- # return 0 00:18:55.379 13:29:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:55.379 13:29:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.379 13:29:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:55.379 13:29:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:55.379 13:29:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.379 13:29:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:55.379 13:29:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:55.379 13:29:12 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:18:55.379 13:29:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:55.379 13:29:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:55.379 13:29:12 -- common/autotest_common.sh@10 -- # set +x 00:18:55.379 13:29:12 -- nvmf/common.sh@470 -- # nvmfpid=68377 00:18:55.379 13:29:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:55.379 13:29:12 -- nvmf/common.sh@471 -- # waitforlisten 68377 00:18:55.379 13:29:12 -- common/autotest_common.sh@817 -- # '[' -z 68377 ']' 00:18:55.379 13:29:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.379 13:29:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.379 13:29:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.379 13:29:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:55.379 13:29:12 -- common/autotest_common.sh@10 -- # set +x 00:18:55.659 [2024-04-26 13:29:12.876185] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:18:55.659 [2024-04-26 13:29:12.876346] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.659 [2024-04-26 13:29:13.016920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:55.918 [2024-04-26 13:29:13.138516] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.918 [2024-04-26 13:29:13.138577] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.918 [2024-04-26 13:29:13.138589] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.918 [2024-04-26 13:29:13.138598] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.918 [2024-04-26 13:29:13.138606] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
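With the target application up, the abort test wires a delay bdev into an NVMe-oF subsystem and then points the abort example at it over TCP. Condensed from the rpc_cmd calls traced below (rpc_cmd forwards to scripts/rpc.py against the target just started; the comment lines are explanatory and not part of the trace), the sequence is roughly:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # queue 128 I/Os against the artificially slowed namespace and abort them from one core for one second
  /home/vagrant/spdk_repo/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The delay bdev presumably keeps each request outstanding long enough for the abort to catch it, which is why the run below reports tens of thousands of successful aborts against only a handful that missed.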
00:18:55.918 [2024-04-26 13:29:13.138994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.918 [2024-04-26 13:29:13.139277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:55.918 [2024-04-26 13:29:13.139283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.483 13:29:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:56.483 13:29:13 -- common/autotest_common.sh@850 -- # return 0 00:18:56.483 13:29:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:56.483 13:29:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:56.483 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.483 13:29:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.483 13:29:13 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:18:56.483 13:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.483 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.741 [2024-04-26 13:29:13.937542] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.741 13:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.741 13:29:13 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:18:56.741 13:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.741 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.741 Malloc0 00:18:56.741 13:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.741 13:29:13 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:56.741 13:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.741 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.741 Delay0 00:18:56.741 13:29:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.741 13:29:13 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:56.741 13:29:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.741 13:29:13 -- common/autotest_common.sh@10 -- # set +x 00:18:56.742 13:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.742 13:29:14 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:18:56.742 13:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.742 13:29:14 -- common/autotest_common.sh@10 -- # set +x 00:18:56.742 13:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.742 13:29:14 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:56.742 13:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.742 13:29:14 -- common/autotest_common.sh@10 -- # set +x 00:18:56.742 [2024-04-26 13:29:14.020040] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.742 13:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.742 13:29:14 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:56.742 13:29:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.742 13:29:14 -- common/autotest_common.sh@10 -- # set +x 00:18:56.742 13:29:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.742 13:29:14 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:18:57.000 [2024-04-26 13:29:14.206324] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:58.903 Initializing NVMe Controllers 00:18:58.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:58.903 controller IO queue size 128 less than required 00:18:58.903 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:18:58.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:58.903 Initialization complete. Launching workers. 00:18:58.903 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31219 00:18:58.903 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31280, failed to submit 62 00:18:58.903 success 31223, unsuccess 57, failed 0 00:18:58.903 13:29:16 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:58.903 13:29:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:58.903 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:18:58.903 13:29:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:58.903 13:29:16 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:58.903 13:29:16 -- target/abort.sh@38 -- # nvmftestfini 00:18:58.903 13:29:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:58.903 13:29:16 -- nvmf/common.sh@117 -- # sync 00:18:58.903 13:29:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:58.903 13:29:16 -- nvmf/common.sh@120 -- # set +e 00:18:58.903 13:29:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:58.903 13:29:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:58.903 rmmod nvme_tcp 00:18:58.903 rmmod nvme_fabrics 00:18:58.903 rmmod nvme_keyring 00:18:58.903 13:29:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:58.903 13:29:16 -- nvmf/common.sh@124 -- # set -e 00:18:58.903 13:29:16 -- nvmf/common.sh@125 -- # return 0 00:18:58.903 13:29:16 -- nvmf/common.sh@478 -- # '[' -n 68377 ']' 00:18:58.903 13:29:16 -- nvmf/common.sh@479 -- # killprocess 68377 00:18:58.903 13:29:16 -- common/autotest_common.sh@936 -- # '[' -z 68377 ']' 00:18:58.903 13:29:16 -- common/autotest_common.sh@940 -- # kill -0 68377 00:18:58.903 13:29:16 -- common/autotest_common.sh@941 -- # uname 00:18:58.903 13:29:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:58.903 13:29:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68377 00:18:59.232 13:29:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:59.232 13:29:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:59.232 13:29:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68377' 00:18:59.232 killing process with pid 68377 00:18:59.232 13:29:16 -- common/autotest_common.sh@955 -- # kill 68377 00:18:59.232 13:29:16 -- common/autotest_common.sh@960 -- # wait 68377 00:18:59.232 13:29:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:59.232 13:29:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:59.232 13:29:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:59.232 13:29:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:59.232 13:29:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:59.232 13:29:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.232 
13:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.232 13:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.493 13:29:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:59.493 00:18:59.493 real 0m4.294s 00:18:59.493 user 0m12.431s 00:18:59.493 sys 0m0.981s 00:18:59.493 13:29:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:59.493 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:18:59.493 ************************************ 00:18:59.493 END TEST nvmf_abort 00:18:59.493 ************************************ 00:18:59.493 13:29:16 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:59.493 13:29:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:59.493 13:29:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.493 13:29:16 -- common/autotest_common.sh@10 -- # set +x 00:18:59.493 ************************************ 00:18:59.493 START TEST nvmf_ns_hotplug_stress 00:18:59.493 ************************************ 00:18:59.493 13:29:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:59.493 * Looking for test storage... 00:18:59.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:59.493 13:29:16 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:59.493 13:29:16 -- nvmf/common.sh@7 -- # uname -s 00:18:59.493 13:29:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.493 13:29:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.493 13:29:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.493 13:29:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.493 13:29:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.493 13:29:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.493 13:29:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.493 13:29:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.493 13:29:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.493 13:29:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:59.493 13:29:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:18:59.493 13:29:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.493 13:29:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.493 13:29:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:59.493 13:29:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.493 13:29:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:59.493 13:29:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.493 13:29:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.493 13:29:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.493 13:29:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.493 13:29:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.493 13:29:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.493 13:29:16 -- paths/export.sh@5 -- # export PATH 00:18:59.493 13:29:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.493 13:29:16 -- nvmf/common.sh@47 -- # : 0 00:18:59.493 13:29:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.493 13:29:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.493 13:29:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.493 13:29:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.493 13:29:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.493 13:29:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.493 13:29:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.493 13:29:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.493 13:29:16 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.493 13:29:16 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:18:59.493 13:29:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:59.493 13:29:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.493 13:29:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:59.493 13:29:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:59.493 13:29:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:59.493 13:29:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:18:59.493 13:29:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.493 13:29:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.493 13:29:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:59.493 13:29:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:59.493 13:29:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.493 13:29:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.493 13:29:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:59.493 13:29:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:59.493 13:29:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:59.493 13:29:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:59.493 13:29:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:59.493 13:29:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.493 13:29:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:59.493 13:29:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:59.493 13:29:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:59.493 13:29:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:59.493 13:29:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:59.493 13:29:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:59.751 Cannot find device "nvmf_tgt_br" 00:18:59.751 13:29:16 -- nvmf/common.sh@155 -- # true 00:18:59.751 13:29:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:59.751 Cannot find device "nvmf_tgt_br2" 00:18:59.751 13:29:16 -- nvmf/common.sh@156 -- # true 00:18:59.751 13:29:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:59.751 13:29:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:59.751 Cannot find device "nvmf_tgt_br" 00:18:59.751 13:29:16 -- nvmf/common.sh@158 -- # true 00:18:59.751 13:29:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:59.751 Cannot find device "nvmf_tgt_br2" 00:18:59.751 13:29:16 -- nvmf/common.sh@159 -- # true 00:18:59.751 13:29:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:59.752 13:29:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:59.752 13:29:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:59.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.752 13:29:17 -- nvmf/common.sh@162 -- # true 00:18:59.752 13:29:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:59.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:59.752 13:29:17 -- nvmf/common.sh@163 -- # true 00:18:59.752 13:29:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:59.752 13:29:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:59.752 13:29:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:59.752 13:29:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:59.752 13:29:17 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:59.752 13:29:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:59.752 13:29:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:59.752 13:29:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:59.752 13:29:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:59.752 13:29:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:59.752 13:29:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:59.752 13:29:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:59.752 13:29:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:59.752 13:29:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:59.752 13:29:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:59.752 13:29:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:59.752 13:29:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:59.752 13:29:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:59.752 13:29:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:59.752 13:29:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:59.752 13:29:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:00.010 13:29:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:00.010 13:29:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:00.010 13:29:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:00.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:00.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:00.010 00:19:00.010 --- 10.0.0.2 ping statistics --- 00:19:00.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.010 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:00.010 13:29:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:00.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:00.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:00.010 00:19:00.010 --- 10.0.0.3 ping statistics --- 00:19:00.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.010 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:00.010 13:29:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:00.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:00.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:19:00.010 00:19:00.010 --- 10.0.0.1 ping statistics --- 00:19:00.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:00.010 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:00.011 13:29:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:00.011 13:29:17 -- nvmf/common.sh@422 -- # return 0 00:19:00.011 13:29:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:00.011 13:29:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:00.011 13:29:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:00.011 13:29:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:00.011 13:29:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:00.011 13:29:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:00.011 13:29:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:00.011 13:29:17 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:19:00.011 13:29:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:00.011 13:29:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:00.011 13:29:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.011 13:29:17 -- nvmf/common.sh@470 -- # nvmfpid=68642 00:19:00.011 13:29:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:00.011 13:29:17 -- nvmf/common.sh@471 -- # waitforlisten 68642 00:19:00.011 13:29:17 -- common/autotest_common.sh@817 -- # '[' -z 68642 ']' 00:19:00.011 13:29:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.011 13:29:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:00.011 13:29:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.011 13:29:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.011 13:29:17 -- common/autotest_common.sh@10 -- # set +x 00:19:00.011 [2024-04-26 13:29:17.307395] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:00.011 [2024-04-26 13:29:17.307487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.011 [2024-04-26 13:29:17.447085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.270 [2024-04-26 13:29:17.576457] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:00.270 [2024-04-26 13:29:17.576583] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:00.270 [2024-04-26 13:29:17.576611] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:00.270 [2024-04-26 13:29:17.576631] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:00.270 [2024-04-26 13:29:17.576651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
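(Condensed, the virtual network that nvmf_veth_init builds above is roughly the following; names and addresses are exactly those in the log, and the three pings confirm the bridge forwards before the target app is started:)
# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator, stays on the host, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target port, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# the *_br veth peers are enslaved to one bridge so host and namespace can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT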
00:19:00.270 [2024-04-26 13:29:17.576884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.270 [2024-04-26 13:29:17.577682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:00.270 [2024-04-26 13:29:17.577702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.204 13:29:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.204 13:29:18 -- common/autotest_common.sh@850 -- # return 0 00:19:01.204 13:29:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:01.204 13:29:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:01.204 13:29:18 -- common/autotest_common.sh@10 -- # set +x 00:19:01.204 13:29:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.204 13:29:18 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:19:01.204 13:29:18 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:01.204 [2024-04-26 13:29:18.636881] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.462 13:29:18 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:01.462 13:29:18 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.029 [2024-04-26 13:29:19.189486] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.029 13:29:19 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:02.316 13:29:19 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:19:02.316 Malloc0 00:19:02.574 13:29:19 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:02.574 Delay0 00:19:02.574 13:29:20 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:02.832 13:29:20 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:19:03.091 NULL1 00:19:03.091 13:29:20 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:03.349 13:29:20 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=68777 00:19:03.349 13:29:20 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:19:03.349 13:29:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:03.349 13:29:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.723 Read completed with error (sct=0, sc=11) 00:19:04.723 13:29:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:04.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:04.723 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:19:04.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:04.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:04.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:04.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:04.981 13:29:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:19:04.981 13:29:22 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:19:05.239 true 00:19:05.239 13:29:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:05.239 13:29:22 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:06.214 13:29:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:06.214 13:29:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:19:06.214 13:29:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:19:06.473 true 00:19:06.473 13:29:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:06.473 13:29:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:06.731 13:29:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:07.000 13:29:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:19:07.000 13:29:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:19:07.258 true 00:19:07.258 13:29:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:07.258 13:29:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.517 13:29:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:07.811 13:29:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:19:07.811 13:29:25 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:19:08.070 true 00:19:08.070 13:29:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:08.070 13:29:25 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.007 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:09.007 13:29:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.266 13:29:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:19:09.266 13:29:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:19:09.525 true 00:19:09.525 13:29:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:09.525 13:29:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.783 13:29:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
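(The pattern repeated above and below for each null_size value up to 1030 is the actual stress loop: while spdk_nvme_perf keeps I/O running against cnode1, the script detaches namespace 1, re-attaches Delay0, then grows NULL1 by one block. A hedged sketch of that loop using the same rpc.py calls seen in the log; the exact script body may differ:)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
null_size=1000
while kill -0 "$PERF_PID"; do                                  # perf tool (68777 here) still running?
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"                   # grow the null bdev by one block
done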
00:19:10.042 13:29:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:19:10.042 13:29:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:19:10.300 true 00:19:10.300 13:29:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:10.300 13:29:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.558 13:29:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.815 13:29:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:19:10.815 13:29:28 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:19:11.074 true 00:19:11.074 13:29:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:11.074 13:29:28 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.009 13:29:29 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:12.267 13:29:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:19:12.267 13:29:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:19:12.525 true 00:19:12.525 13:29:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:12.525 13:29:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.783 13:29:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:13.350 13:29:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:19:13.350 13:29:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:19:13.350 true 00:19:13.350 13:29:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:13.350 13:29:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.608 13:29:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:14.185 13:29:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:19:14.185 13:29:31 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:19:14.185 true 00:19:14.444 13:29:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:14.444 13:29:31 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.702 13:29:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:14.960 13:29:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:19:14.961 13:29:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:19:14.961 true 00:19:15.219 13:29:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:15.219 13:29:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:19:16.151 13:29:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.409 13:29:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:19:16.409 13:29:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:16.668 true 00:19:16.668 13:29:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:16.668 13:29:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.926 13:29:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.926 13:29:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:19:16.926 13:29:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:17.185 true 00:19:17.185 13:29:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:17.185 13:29:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.443 13:29:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.702 13:29:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:19:17.702 13:29:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:17.960 true 00:19:17.960 13:29:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:17.961 13:29:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.336 13:29:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:19.336 13:29:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:19:19.336 13:29:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:19.593 true 00:19:19.593 13:29:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:19.593 13:29:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:19.852 13:29:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:20.111 13:29:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:19:20.111 13:29:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:20.369 true 00:19:20.628 13:29:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:20.628 13:29:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:20.886 13:29:38 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:21.145 13:29:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:19:21.145 13:29:38 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:21.420 true 00:19:21.420 13:29:38 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:21.420 13:29:38 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:21.725 13:29:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:21.984 13:29:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:19:21.984 13:29:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:22.243 true 00:19:22.243 13:29:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:22.243 13:29:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.180 13:29:40 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:23.438 13:29:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:19:23.438 13:29:40 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:23.696 true 00:19:23.696 13:29:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:23.696 13:29:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.954 13:29:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.212 13:29:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:19:24.212 13:29:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:24.471 true 00:19:24.471 13:29:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:24.471 13:29:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.730 13:29:42 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:24.989 13:29:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:19:24.989 13:29:42 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:25.250 true 00:19:25.250 13:29:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:25.250 13:29:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.186 13:29:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:26.445 13:29:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:19:26.445 13:29:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:26.704 true 00:19:26.704 13:29:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:26.704 13:29:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.963 13:29:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.222 13:29:44 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1023 00:19:27.222 13:29:44 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:27.222 true 00:19:27.222 13:29:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:27.222 13:29:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.480 13:29:44 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:27.739 13:29:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:19:27.739 13:29:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:28.306 true 00:19:28.306 13:29:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:28.306 13:29:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.240 13:29:46 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:29.240 13:29:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:19:29.240 13:29:46 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:29.498 true 00:19:29.498 13:29:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:29.498 13:29:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.756 13:29:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.014 13:29:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:19:30.014 13:29:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:30.273 true 00:19:30.273 13:29:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:30.273 13:29:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:30.532 13:29:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:30.790 13:29:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:19:30.790 13:29:48 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:31.357 true 00:19:31.357 13:29:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:31.357 13:29:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.293 13:29:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:32.293 13:29:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:19:32.293 13:29:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:32.552 true 00:19:32.552 13:29:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:32.552 13:29:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.810 13:29:50 -- 
target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:33.069 13:29:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:19:33.069 13:29:50 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:33.328 true 00:19:33.328 13:29:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:33.328 13:29:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.586 Initializing NVMe Controllers 00:19:33.586 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:33.586 Controller IO queue size 128, less than required. 00:19:33.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:33.586 Controller IO queue size 128, less than required. 00:19:33.586 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:33.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:33.586 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:19:33.586 Initialization complete. Launching workers. 00:19:33.586 ======================================================== 00:19:33.586 Latency(us) 00:19:33.586 Device Information : IOPS MiB/s Average min max 00:19:33.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 310.94 0.15 150396.43 3806.71 1124274.61 00:19:33.586 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7248.21 3.54 17660.37 3873.96 633221.56 00:19:33.586 ======================================================== 00:19:33.586 Total : 7559.15 3.69 23120.36 3806.71 1124274.61 00:19:33.586 00:19:33.845 13:29:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:34.103 13:29:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:19:34.103 13:29:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:19:34.388 true 00:19:34.388 13:29:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68777 00:19:34.388 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (68777) - No such process 00:19:34.388 13:29:51 -- target/ns_hotplug_stress.sh@44 -- # wait 68777 00:19:34.388 13:29:51 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:34.388 13:29:51 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:19:34.388 13:29:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:34.388 13:29:51 -- nvmf/common.sh@117 -- # sync 00:19:34.388 13:29:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:34.388 13:29:51 -- nvmf/common.sh@120 -- # set +e 00:19:34.388 13:29:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.388 13:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:34.388 rmmod nvme_tcp 00:19:34.388 rmmod nvme_fabrics 00:19:34.388 rmmod nvme_keyring 00:19:34.388 13:29:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.388 13:29:51 -- nvmf/common.sh@124 -- # set -e 00:19:34.388 13:29:51 -- nvmf/common.sh@125 -- # return 0 00:19:34.388 13:29:51 -- nvmf/common.sh@478 -- # '[' -n 68642 ']' 00:19:34.388 13:29:51 -- nvmf/common.sh@479 -- # killprocess 68642 00:19:34.388 13:29:51 -- 
common/autotest_common.sh@936 -- # '[' -z 68642 ']' 00:19:34.388 13:29:51 -- common/autotest_common.sh@940 -- # kill -0 68642 00:19:34.388 13:29:51 -- common/autotest_common.sh@941 -- # uname 00:19:34.388 13:29:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.388 13:29:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68642 00:19:34.388 killing process with pid 68642 00:19:34.388 13:29:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:34.388 13:29:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:34.388 13:29:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68642' 00:19:34.388 13:29:51 -- common/autotest_common.sh@955 -- # kill 68642 00:19:34.388 13:29:51 -- common/autotest_common.sh@960 -- # wait 68642 00:19:34.648 13:29:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:34.648 13:29:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:34.648 13:29:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:34.648 13:29:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.648 13:29:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.648 13:29:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.648 13:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.648 13:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.648 13:29:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:34.648 00:19:34.648 real 0m35.228s 00:19:34.648 user 2m31.604s 00:19:34.648 sys 0m8.206s 00:19:34.648 13:29:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:34.648 ************************************ 00:19:34.648 13:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:34.648 END TEST nvmf_ns_hotplug_stress 00:19:34.648 ************************************ 00:19:34.648 13:29:52 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:34.648 13:29:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:34.648 13:29:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:34.648 13:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:34.909 ************************************ 00:19:34.909 START TEST nvmf_connect_stress 00:19:34.909 ************************************ 00:19:34.909 13:29:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:34.909 * Looking for test storage... 
00:19:34.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:34.909 13:29:52 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.909 13:29:52 -- nvmf/common.sh@7 -- # uname -s 00:19:34.909 13:29:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.909 13:29:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.909 13:29:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.909 13:29:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.909 13:29:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.909 13:29:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.909 13:29:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.909 13:29:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.909 13:29:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.909 13:29:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:34.909 13:29:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:34.909 13:29:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.909 13:29:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.909 13:29:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.909 13:29:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.909 13:29:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.909 13:29:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.909 13:29:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.909 13:29:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.909 13:29:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.909 13:29:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.909 13:29:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.909 13:29:52 -- paths/export.sh@5 -- # export PATH 00:19:34.909 13:29:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.909 13:29:52 -- nvmf/common.sh@47 -- # : 0 00:19:34.909 13:29:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:34.909 13:29:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:34.909 13:29:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.909 13:29:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.909 13:29:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.909 13:29:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:34.909 13:29:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:34.909 13:29:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:34.909 13:29:52 -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:34.909 13:29:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:34.909 13:29:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.909 13:29:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:34.909 13:29:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:34.909 13:29:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:34.909 13:29:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.909 13:29:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.909 13:29:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.909 13:29:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:34.909 13:29:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:34.909 13:29:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.909 13:29:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.909 13:29:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:34.909 13:29:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:34.909 13:29:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.909 13:29:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.909 13:29:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.909 13:29:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:34.909 13:29:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.909 13:29:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.909 13:29:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.909 13:29:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.909 13:29:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:34.909 13:29:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:34.909 Cannot find device "nvmf_tgt_br" 00:19:34.909 13:29:52 -- nvmf/common.sh@155 -- # true 00:19:34.909 13:29:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.909 Cannot find device "nvmf_tgt_br2" 00:19:34.909 13:29:52 -- nvmf/common.sh@156 -- # true 00:19:34.909 13:29:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:34.909 13:29:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:34.909 Cannot find device "nvmf_tgt_br" 00:19:34.909 13:29:52 -- nvmf/common.sh@158 -- # true 00:19:34.909 13:29:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:34.909 Cannot find device "nvmf_tgt_br2" 00:19:34.909 13:29:52 -- nvmf/common.sh@159 -- # true 00:19:34.909 13:29:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:35.168 13:29:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:35.168 13:29:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.168 13:29:52 -- nvmf/common.sh@162 -- # true 00:19:35.168 13:29:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.168 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.168 13:29:52 -- nvmf/common.sh@163 -- # true 00:19:35.168 13:29:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.168 13:29:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.168 13:29:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.168 13:29:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.168 13:29:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.168 13:29:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.168 13:29:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.168 13:29:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:35.168 13:29:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:35.168 13:29:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:35.168 13:29:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:35.168 13:29:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:35.168 13:29:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:35.168 13:29:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.168 13:29:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:35.168 13:29:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:35.168 13:29:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:35.168 13:29:52 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:35.168 13:29:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:35.169 13:29:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:35.169 13:29:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:35.169 13:29:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:35.169 13:29:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:35.169 13:29:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:35.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:35.169 00:19:35.169 --- 10.0.0.2 ping statistics --- 00:19:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.169 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:35.169 13:29:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:35.169 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:35.169 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:19:35.169 00:19:35.169 --- 10.0.0.3 ping statistics --- 00:19:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.169 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:35.169 13:29:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:35.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:35.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:19:35.169 00:19:35.169 --- 10.0.0.1 ping statistics --- 00:19:35.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.169 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:19:35.169 13:29:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.169 13:29:52 -- nvmf/common.sh@422 -- # return 0 00:19:35.169 13:29:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:35.169 13:29:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.169 13:29:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:35.169 13:29:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:35.169 13:29:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.169 13:29:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:35.169 13:29:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:35.428 13:29:52 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:35.428 13:29:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:35.428 13:29:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:35.428 13:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:35.428 13:29:52 -- nvmf/common.sh@470 -- # nvmfpid=69933 00:19:35.428 13:29:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:35.428 13:29:52 -- nvmf/common.sh@471 -- # waitforlisten 69933 00:19:35.428 13:29:52 -- common/autotest_common.sh@817 -- # '[' -z 69933 ']' 00:19:35.428 13:29:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.428 13:29:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:35.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.428 13:29:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
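(waitforlisten, used right after nvmfappstart below, simply polls until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, giving up after max_retries=100. A hedged, minimal equivalent; rpc_get_methods is just one cheap RPC to probe with, the real helper may check differently:)
sock=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do                                # max_retries=100, as in the log
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
        break                                                  # target is up and listening
    fi
    sleep 0.5
done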
00:19:35.428 13:29:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:35.428 13:29:52 -- common/autotest_common.sh@10 -- # set +x 00:19:35.428 [2024-04-26 13:29:52.674676] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:35.428 [2024-04-26 13:29:52.674756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.428 [2024-04-26 13:29:52.807200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.687 [2024-04-26 13:29:52.928888] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.687 [2024-04-26 13:29:52.928941] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.687 [2024-04-26 13:29:52.928953] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.687 [2024-04-26 13:29:52.928961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.687 [2024-04-26 13:29:52.928969] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.687 [2024-04-26 13:29:52.929135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.687 [2024-04-26 13:29:52.929864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.687 [2024-04-26 13:29:52.929872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.254 13:29:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:36.254 13:29:53 -- common/autotest_common.sh@850 -- # return 0 00:19:36.254 13:29:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:36.254 13:29:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:36.254 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.254 13:29:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.254 13:29:53 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.254 13:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.254 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.512 [2024-04-26 13:29:53.704975] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.512 13:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.512 13:29:53 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:36.512 13:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.512 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.512 13:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.512 13:29:53 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.512 13:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.512 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.512 [2024-04-26 13:29:53.725128] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.512 13:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.512 13:29:53 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:36.512 13:29:53 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:19:36.512 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.512 NULL1 00:19:36.512 13:29:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.512 13:29:53 -- target/connect_stress.sh@21 -- # PERF_PID=69985 00:19:36.512 13:29:53 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:36.512 13:29:53 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:36.513 13:29:53 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # seq 1 20 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:36.513 
13:29:53 -- target/connect_stress.sh@28 -- # cat 00:19:36.513 13:29:53 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:36.513 13:29:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.513 13:29:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.513 13:29:53 -- common/autotest_common.sh@10 -- # set +x 00:19:36.772 13:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:36.772 13:29:54 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:36.772 13:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.772 13:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:36.772 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:19:37.030 13:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.030 13:29:54 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:37.030 13:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.030 13:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.030 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:19:37.597 13:29:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.597 13:29:54 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:37.597 13:29:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.597 13:29:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.597 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:19:37.855 13:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:37.855 13:29:55 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:37.855 13:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.855 13:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:37.855 13:29:55 -- common/autotest_common.sh@10 -- # set +x 00:19:38.113 13:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.113 13:29:55 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:38.113 13:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.113 13:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.113 13:29:55 -- common/autotest_common.sh@10 -- # set +x 00:19:38.372 13:29:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.372 13:29:55 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:38.372 13:29:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.372 13:29:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.372 13:29:55 -- common/autotest_common.sh@10 -- # set +x 00:19:38.939 13:29:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:38.939 13:29:56 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:38.939 13:29:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.939 13:29:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:38.939 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:19:39.197 13:29:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.197 13:29:56 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:39.197 13:29:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.197 13:29:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.197 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:19:39.456 13:29:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.456 13:29:56 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:39.456 13:29:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.456 13:29:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.456 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:19:39.714 13:29:57 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:19:39.714 13:29:57 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:39.714 13:29:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.714 13:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.714 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:19:39.973 13:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.973 13:29:57 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:39.973 13:29:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.973 13:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.973 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:19:40.539 13:29:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.539 13:29:57 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:40.539 13:29:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.539 13:29:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.539 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:19:40.797 13:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.797 13:29:58 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:40.797 13:29:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.797 13:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.797 13:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.055 13:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.055 13:29:58 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:41.055 13:29:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.055 13:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.055 13:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.314 13:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.314 13:29:58 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:41.314 13:29:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.314 13:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.314 13:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:41.572 13:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.572 13:29:58 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:41.572 13:29:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.572 13:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.572 13:29:58 -- common/autotest_common.sh@10 -- # set +x 00:19:42.139 13:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.139 13:29:59 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:42.139 13:29:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.139 13:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.139 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.397 13:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.397 13:29:59 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:42.397 13:29:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.397 13:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.397 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.655 13:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.655 13:29:59 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:42.655 13:29:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.655 13:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.655 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:19:42.914 13:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.914 
13:30:00 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:42.914 13:30:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.914 13:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.914 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:19:43.172 13:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.172 13:30:00 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:43.172 13:30:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.172 13:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.172 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:19:43.739 13:30:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.739 13:30:00 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:43.739 13:30:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.739 13:30:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.739 13:30:00 -- common/autotest_common.sh@10 -- # set +x 00:19:43.997 13:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:43.997 13:30:01 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:43.997 13:30:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.997 13:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:43.997 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:44.255 13:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.255 13:30:01 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:44.255 13:30:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.255 13:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.255 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:44.512 13:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.512 13:30:01 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:44.513 13:30:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.513 13:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.513 13:30:01 -- common/autotest_common.sh@10 -- # set +x 00:19:44.770 13:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.770 13:30:02 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:44.770 13:30:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.770 13:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.770 13:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:45.335 13:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.335 13:30:02 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:45.335 13:30:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.335 13:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.335 13:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:45.594 13:30:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.594 13:30:02 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:45.594 13:30:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.594 13:30:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.594 13:30:02 -- common/autotest_common.sh@10 -- # set +x 00:19:45.852 13:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.852 13:30:03 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:45.852 13:30:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:45.852 13:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.852 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:46.109 13:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.109 13:30:03 -- 
target/connect_stress.sh@34 -- # kill -0 69985 00:19:46.109 13:30:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.109 13:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.109 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:46.404 13:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.404 13:30:03 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:46.404 13:30:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:46.404 13:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:46.404 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:19:46.680 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:46.680 13:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:46.680 13:30:04 -- target/connect_stress.sh@34 -- # kill -0 69985 00:19:46.680 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69985) - No such process 00:19:46.680 13:30:04 -- target/connect_stress.sh@38 -- # wait 69985 00:19:46.680 13:30:04 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:46.938 13:30:04 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:46.938 13:30:04 -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:46.938 13:30:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:46.938 13:30:04 -- nvmf/common.sh@117 -- # sync 00:19:46.938 13:30:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:46.938 13:30:04 -- nvmf/common.sh@120 -- # set +e 00:19:46.938 13:30:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.938 13:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:46.938 rmmod nvme_tcp 00:19:46.938 rmmod nvme_fabrics 00:19:46.938 rmmod nvme_keyring 00:19:46.938 13:30:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.938 13:30:04 -- nvmf/common.sh@124 -- # set -e 00:19:46.938 13:30:04 -- nvmf/common.sh@125 -- # return 0 00:19:46.938 13:30:04 -- nvmf/common.sh@478 -- # '[' -n 69933 ']' 00:19:46.938 13:30:04 -- nvmf/common.sh@479 -- # killprocess 69933 00:19:46.938 13:30:04 -- common/autotest_common.sh@936 -- # '[' -z 69933 ']' 00:19:46.938 13:30:04 -- common/autotest_common.sh@940 -- # kill -0 69933 00:19:46.938 13:30:04 -- common/autotest_common.sh@941 -- # uname 00:19:46.938 13:30:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:46.938 13:30:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69933 00:19:46.938 13:30:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:46.938 13:30:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:46.938 killing process with pid 69933 00:19:46.938 13:30:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69933' 00:19:46.938 13:30:04 -- common/autotest_common.sh@955 -- # kill 69933 00:19:46.938 13:30:04 -- common/autotest_common.sh@960 -- # wait 69933 00:19:47.195 13:30:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:47.195 13:30:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:47.195 13:30:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:47.195 13:30:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:47.195 13:30:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:47.195 13:30:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.195 13:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.195 13:30:04 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:47.195 13:30:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:47.195 00:19:47.195 real 0m12.386s 00:19:47.195 user 0m41.089s 00:19:47.195 sys 0m3.297s 00:19:47.195 13:30:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:47.195 13:30:04 -- common/autotest_common.sh@10 -- # set +x 00:19:47.195 ************************************ 00:19:47.195 END TEST nvmf_connect_stress 00:19:47.195 ************************************ 00:19:47.195 13:30:04 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:47.195 13:30:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:47.195 13:30:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:47.195 13:30:04 -- common/autotest_common.sh@10 -- # set +x 00:19:47.454 ************************************ 00:19:47.454 START TEST nvmf_fused_ordering 00:19:47.454 ************************************ 00:19:47.454 13:30:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:47.454 * Looking for test storage... 00:19:47.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:47.454 13:30:04 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.454 13:30:04 -- nvmf/common.sh@7 -- # uname -s 00:19:47.454 13:30:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.454 13:30:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.454 13:30:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.454 13:30:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.454 13:30:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.454 13:30:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.454 13:30:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.454 13:30:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.454 13:30:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.454 13:30:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:47.454 13:30:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:47.454 13:30:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.454 13:30:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.454 13:30:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:47.454 13:30:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.454 13:30:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.454 13:30:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.454 13:30:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.454 13:30:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.454 13:30:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.454 13:30:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.454 13:30:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.454 13:30:04 -- paths/export.sh@5 -- # export PATH 00:19:47.454 13:30:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.454 13:30:04 -- nvmf/common.sh@47 -- # : 0 00:19:47.454 13:30:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.454 13:30:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.454 13:30:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.454 13:30:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.454 13:30:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.454 13:30:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.454 13:30:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.454 13:30:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.454 13:30:04 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:47.454 13:30:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:47.454 13:30:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.454 13:30:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:47.454 13:30:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:47.454 13:30:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:47.454 13:30:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.454 13:30:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.454 13:30:04 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.454 13:30:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:47.454 13:30:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:47.454 13:30:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.454 13:30:04 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.454 13:30:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:47.454 13:30:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:47.454 13:30:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:47.454 13:30:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:47.454 13:30:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:47.454 13:30:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.454 13:30:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:47.454 13:30:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:47.454 13:30:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:47.454 13:30:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:47.454 13:30:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:47.454 13:30:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:47.454 Cannot find device "nvmf_tgt_br" 00:19:47.454 13:30:04 -- nvmf/common.sh@155 -- # true 00:19:47.454 13:30:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:47.454 Cannot find device "nvmf_tgt_br2" 00:19:47.454 13:30:04 -- nvmf/common.sh@156 -- # true 00:19:47.454 13:30:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:47.454 13:30:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:47.454 Cannot find device "nvmf_tgt_br" 00:19:47.454 13:30:04 -- nvmf/common.sh@158 -- # true 00:19:47.454 13:30:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:47.454 Cannot find device "nvmf_tgt_br2" 00:19:47.454 13:30:04 -- nvmf/common.sh@159 -- # true 00:19:47.454 13:30:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:47.454 13:30:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:47.454 13:30:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:47.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.454 13:30:04 -- nvmf/common.sh@162 -- # true 00:19:47.454 13:30:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:47.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:47.712 13:30:04 -- nvmf/common.sh@163 -- # true 00:19:47.712 13:30:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:47.712 13:30:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:47.712 13:30:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:47.712 13:30:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:47.712 13:30:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:47.712 13:30:04 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:47.712 13:30:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:47.712 13:30:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:47.712 13:30:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:47.712 13:30:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:47.712 13:30:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:47.712 13:30:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:47.712 13:30:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:47.712 13:30:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:47.712 13:30:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:47.712 13:30:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:47.712 13:30:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:47.712 13:30:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:47.712 13:30:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:47.712 13:30:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:47.712 13:30:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:47.712 13:30:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:47.712 13:30:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:47.712 13:30:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:47.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:19:47.712 00:19:47.712 --- 10.0.0.2 ping statistics --- 00:19:47.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.712 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:19:47.713 13:30:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:47.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:47.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:47.713 00:19:47.713 --- 10.0.0.3 ping statistics --- 00:19:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.713 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:47.713 13:30:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:47.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:47.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:47.713 00:19:47.713 --- 10.0.0.1 ping statistics --- 00:19:47.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.713 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:47.713 13:30:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.713 13:30:05 -- nvmf/common.sh@422 -- # return 0 00:19:47.713 13:30:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:47.713 13:30:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.713 13:30:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:47.713 13:30:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:47.713 13:30:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.713 13:30:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:47.713 13:30:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:47.713 13:30:05 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:47.713 13:30:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:47.713 13:30:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:47.713 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:47.713 13:30:05 -- nvmf/common.sh@470 -- # nvmfpid=70317 00:19:47.713 13:30:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:47.713 13:30:05 -- nvmf/common.sh@471 -- # waitforlisten 70317 00:19:47.713 13:30:05 -- common/autotest_common.sh@817 -- # '[' -z 70317 ']' 00:19:47.713 13:30:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.713 13:30:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:47.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.713 13:30:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.713 13:30:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:47.713 13:30:05 -- common/autotest_common.sh@10 -- # set +x 00:19:47.971 [2024-04-26 13:30:05.193669] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:47.971 [2024-04-26 13:30:05.193797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.971 [2024-04-26 13:30:05.334164] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.229 [2024-04-26 13:30:05.462028] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.229 [2024-04-26 13:30:05.462105] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.229 [2024-04-26 13:30:05.462119] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.229 [2024-04-26 13:30:05.462130] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.229 [2024-04-26 13:30:05.462139] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.229 [2024-04-26 13:30:05.462186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.795 13:30:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:48.795 13:30:06 -- common/autotest_common.sh@850 -- # return 0 00:19:48.795 13:30:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:48.795 13:30:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.795 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 13:30:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.054 13:30:06 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 [2024-04-26 13:30:06.287254] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 [2024-04-26 13:30:06.303384] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 NULL1 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:49.054 13:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.054 13:30:06 -- common/autotest_common.sh@10 -- # set +x 00:19:49.054 13:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.054 13:30:06 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:49.054 [2024-04-26 13:30:06.359578] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:19:49.054 [2024-04-26 13:30:06.359652] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70367 ] 00:19:49.621 Attached to nqn.2016-06.io.spdk:cnode1 00:19:49.621 Namespace ID: 1 size: 1GB 00:19:49.621 fused_ordering(0) 00:19:49.621 fused_ordering(1) 00:19:49.621 fused_ordering(2) 00:19:49.621 fused_ordering(3) 00:19:49.621 fused_ordering(4) 00:19:49.621 fused_ordering(5) 00:19:49.621 fused_ordering(6) 00:19:49.621 fused_ordering(7) 00:19:49.621 fused_ordering(8) 00:19:49.621 fused_ordering(9) 00:19:49.621 fused_ordering(10) 00:19:49.621 fused_ordering(11) 00:19:49.621 fused_ordering(12) 00:19:49.621 fused_ordering(13) 00:19:49.621 fused_ordering(14) 00:19:49.621 fused_ordering(15) 00:19:49.621 fused_ordering(16) 00:19:49.621 fused_ordering(17) 00:19:49.621 fused_ordering(18) 00:19:49.621 fused_ordering(19) 00:19:49.621 fused_ordering(20) 00:19:49.621 fused_ordering(21) 00:19:49.621 fused_ordering(22) 00:19:49.621 fused_ordering(23) 00:19:49.621 fused_ordering(24) 00:19:49.621 fused_ordering(25) 00:19:49.621 fused_ordering(26) 00:19:49.621 fused_ordering(27) 00:19:49.621 fused_ordering(28) 00:19:49.621 fused_ordering(29) 00:19:49.621 fused_ordering(30) 00:19:49.621 fused_ordering(31) 00:19:49.621 fused_ordering(32) 00:19:49.621 fused_ordering(33) 00:19:49.621 fused_ordering(34) 00:19:49.621 fused_ordering(35) 00:19:49.621 fused_ordering(36) 00:19:49.621 fused_ordering(37) 00:19:49.621 fused_ordering(38) 00:19:49.621 fused_ordering(39) 00:19:49.621 fused_ordering(40) 00:19:49.621 fused_ordering(41) 00:19:49.621 fused_ordering(42) 00:19:49.621 fused_ordering(43) 00:19:49.621 fused_ordering(44) 00:19:49.621 fused_ordering(45) 00:19:49.621 fused_ordering(46) 00:19:49.621 fused_ordering(47) 00:19:49.621 fused_ordering(48) 00:19:49.621 fused_ordering(49) 00:19:49.621 fused_ordering(50) 00:19:49.621 fused_ordering(51) 00:19:49.621 fused_ordering(52) 00:19:49.621 fused_ordering(53) 00:19:49.621 fused_ordering(54) 00:19:49.621 fused_ordering(55) 00:19:49.621 fused_ordering(56) 00:19:49.621 fused_ordering(57) 00:19:49.621 fused_ordering(58) 00:19:49.621 fused_ordering(59) 00:19:49.621 fused_ordering(60) 00:19:49.621 fused_ordering(61) 00:19:49.621 fused_ordering(62) 00:19:49.621 fused_ordering(63) 00:19:49.621 fused_ordering(64) 00:19:49.621 fused_ordering(65) 00:19:49.621 fused_ordering(66) 00:19:49.621 fused_ordering(67) 00:19:49.621 fused_ordering(68) 00:19:49.621 fused_ordering(69) 00:19:49.621 fused_ordering(70) 00:19:49.621 fused_ordering(71) 00:19:49.621 fused_ordering(72) 00:19:49.621 fused_ordering(73) 00:19:49.621 fused_ordering(74) 00:19:49.621 fused_ordering(75) 00:19:49.621 fused_ordering(76) 00:19:49.621 fused_ordering(77) 00:19:49.621 fused_ordering(78) 00:19:49.621 fused_ordering(79) 00:19:49.621 fused_ordering(80) 00:19:49.621 fused_ordering(81) 00:19:49.621 fused_ordering(82) 00:19:49.621 fused_ordering(83) 00:19:49.621 fused_ordering(84) 00:19:49.621 fused_ordering(85) 00:19:49.621 fused_ordering(86) 00:19:49.621 fused_ordering(87) 00:19:49.621 fused_ordering(88) 00:19:49.621 fused_ordering(89) 00:19:49.621 fused_ordering(90) 00:19:49.621 fused_ordering(91) 00:19:49.621 fused_ordering(92) 00:19:49.621 fused_ordering(93) 00:19:49.621 fused_ordering(94) 00:19:49.621 fused_ordering(95) 00:19:49.621 fused_ordering(96) 00:19:49.621 fused_ordering(97) 00:19:49.621 fused_ordering(98) 
00:19:49.621 fused_ordering(99) 00:19:49.621 fused_ordering(100) 00:19:49.621 fused_ordering(101) 00:19:49.621 fused_ordering(102) 00:19:49.621 fused_ordering(103) 00:19:49.621 fused_ordering(104) 00:19:49.621 fused_ordering(105) 00:19:49.621 fused_ordering(106) 00:19:49.621 fused_ordering(107) 00:19:49.621 fused_ordering(108) 00:19:49.621 fused_ordering(109) 00:19:49.621 fused_ordering(110) 00:19:49.621 fused_ordering(111) 00:19:49.621 fused_ordering(112) 00:19:49.621 fused_ordering(113) 00:19:49.621 fused_ordering(114) 00:19:49.621 fused_ordering(115) 00:19:49.621 fused_ordering(116) 00:19:49.621 fused_ordering(117) 00:19:49.621 fused_ordering(118) 00:19:49.621 fused_ordering(119) 00:19:49.621 fused_ordering(120) 00:19:49.621 fused_ordering(121) 00:19:49.621 fused_ordering(122) 00:19:49.621 fused_ordering(123) 00:19:49.621 fused_ordering(124) 00:19:49.621 fused_ordering(125) 00:19:49.621 fused_ordering(126) 00:19:49.621 fused_ordering(127) 00:19:49.621 fused_ordering(128) 00:19:49.621 fused_ordering(129) 00:19:49.621 fused_ordering(130) 00:19:49.621 fused_ordering(131) 00:19:49.621 fused_ordering(132) 00:19:49.621 fused_ordering(133) 00:19:49.621 fused_ordering(134) 00:19:49.621 fused_ordering(135) 00:19:49.621 fused_ordering(136) 00:19:49.621 fused_ordering(137) 00:19:49.621 fused_ordering(138) 00:19:49.621 fused_ordering(139) 00:19:49.621 fused_ordering(140) 00:19:49.621 fused_ordering(141) 00:19:49.621 fused_ordering(142) 00:19:49.621 fused_ordering(143) 00:19:49.621 fused_ordering(144) 00:19:49.621 fused_ordering(145) 00:19:49.621 fused_ordering(146) 00:19:49.621 fused_ordering(147) 00:19:49.621 fused_ordering(148) 00:19:49.621 fused_ordering(149) 00:19:49.621 fused_ordering(150) 00:19:49.621 fused_ordering(151) 00:19:49.621 fused_ordering(152) 00:19:49.621 fused_ordering(153) 00:19:49.621 fused_ordering(154) 00:19:49.621 fused_ordering(155) 00:19:49.621 fused_ordering(156) 00:19:49.621 fused_ordering(157) 00:19:49.621 fused_ordering(158) 00:19:49.621 fused_ordering(159) 00:19:49.621 fused_ordering(160) 00:19:49.621 fused_ordering(161) 00:19:49.621 fused_ordering(162) 00:19:49.621 fused_ordering(163) 00:19:49.621 fused_ordering(164) 00:19:49.621 fused_ordering(165) 00:19:49.621 fused_ordering(166) 00:19:49.621 fused_ordering(167) 00:19:49.621 fused_ordering(168) 00:19:49.621 fused_ordering(169) 00:19:49.621 fused_ordering(170) 00:19:49.621 fused_ordering(171) 00:19:49.621 fused_ordering(172) 00:19:49.621 fused_ordering(173) 00:19:49.621 fused_ordering(174) 00:19:49.621 fused_ordering(175) 00:19:49.621 fused_ordering(176) 00:19:49.621 fused_ordering(177) 00:19:49.621 fused_ordering(178) 00:19:49.621 fused_ordering(179) 00:19:49.621 fused_ordering(180) 00:19:49.621 fused_ordering(181) 00:19:49.621 fused_ordering(182) 00:19:49.621 fused_ordering(183) 00:19:49.621 fused_ordering(184) 00:19:49.621 fused_ordering(185) 00:19:49.621 fused_ordering(186) 00:19:49.621 fused_ordering(187) 00:19:49.621 fused_ordering(188) 00:19:49.621 fused_ordering(189) 00:19:49.621 fused_ordering(190) 00:19:49.621 fused_ordering(191) 00:19:49.621 fused_ordering(192) 00:19:49.621 fused_ordering(193) 00:19:49.621 fused_ordering(194) 00:19:49.621 fused_ordering(195) 00:19:49.621 fused_ordering(196) 00:19:49.621 fused_ordering(197) 00:19:49.621 fused_ordering(198) 00:19:49.621 fused_ordering(199) 00:19:49.621 fused_ordering(200) 00:19:49.621 fused_ordering(201) 00:19:49.621 fused_ordering(202) 00:19:49.621 fused_ordering(203) 00:19:49.621 fused_ordering(204) 00:19:49.621 fused_ordering(205) 00:19:49.880 
fused_ordering(206) 00:19:49.880 fused_ordering(207) 00:19:49.880 fused_ordering(208) 00:19:49.880 fused_ordering(209) 00:19:49.880 fused_ordering(210) 00:19:49.880 fused_ordering(211) 00:19:49.880 fused_ordering(212) 00:19:49.880 fused_ordering(213) 00:19:49.880 fused_ordering(214) 00:19:49.880 fused_ordering(215) 00:19:49.880 fused_ordering(216) 00:19:49.880 fused_ordering(217) 00:19:49.880 fused_ordering(218) 00:19:49.880 fused_ordering(219) 00:19:49.880 fused_ordering(220) 00:19:49.880 fused_ordering(221) 00:19:49.880 fused_ordering(222) 00:19:49.880 fused_ordering(223) 00:19:49.880 fused_ordering(224) 00:19:49.880 fused_ordering(225) 00:19:49.880 fused_ordering(226) 00:19:49.880 fused_ordering(227) 00:19:49.880 fused_ordering(228) 00:19:49.880 fused_ordering(229) 00:19:49.880 fused_ordering(230) 00:19:49.880 fused_ordering(231) 00:19:49.880 fused_ordering(232) 00:19:49.880 fused_ordering(233) 00:19:49.880 fused_ordering(234) 00:19:49.880 fused_ordering(235) 00:19:49.880 fused_ordering(236) 00:19:49.880 fused_ordering(237) 00:19:49.880 fused_ordering(238) 00:19:49.880 fused_ordering(239) 00:19:49.880 fused_ordering(240) 00:19:49.880 fused_ordering(241) 00:19:49.880 fused_ordering(242) 00:19:49.880 fused_ordering(243) 00:19:49.880 fused_ordering(244) 00:19:49.880 fused_ordering(245) 00:19:49.880 fused_ordering(246) 00:19:49.880 fused_ordering(247) 00:19:49.880 fused_ordering(248) 00:19:49.880 fused_ordering(249) 00:19:49.880 fused_ordering(250) 00:19:49.880 fused_ordering(251) 00:19:49.880 fused_ordering(252) 00:19:49.880 fused_ordering(253) 00:19:49.880 fused_ordering(254) 00:19:49.880 fused_ordering(255) 00:19:49.880 fused_ordering(256) 00:19:49.880 fused_ordering(257) 00:19:49.880 fused_ordering(258) 00:19:49.880 fused_ordering(259) 00:19:49.880 fused_ordering(260) 00:19:49.880 fused_ordering(261) 00:19:49.880 fused_ordering(262) 00:19:49.880 fused_ordering(263) 00:19:49.880 fused_ordering(264) 00:19:49.880 fused_ordering(265) 00:19:49.880 fused_ordering(266) 00:19:49.880 fused_ordering(267) 00:19:49.880 fused_ordering(268) 00:19:49.880 fused_ordering(269) 00:19:49.880 fused_ordering(270) 00:19:49.880 fused_ordering(271) 00:19:49.880 fused_ordering(272) 00:19:49.880 fused_ordering(273) 00:19:49.880 fused_ordering(274) 00:19:49.880 fused_ordering(275) 00:19:49.880 fused_ordering(276) 00:19:49.880 fused_ordering(277) 00:19:49.880 fused_ordering(278) 00:19:49.880 fused_ordering(279) 00:19:49.880 fused_ordering(280) 00:19:49.880 fused_ordering(281) 00:19:49.880 fused_ordering(282) 00:19:49.880 fused_ordering(283) 00:19:49.880 fused_ordering(284) 00:19:49.880 fused_ordering(285) 00:19:49.880 fused_ordering(286) 00:19:49.880 fused_ordering(287) 00:19:49.880 fused_ordering(288) 00:19:49.880 fused_ordering(289) 00:19:49.880 fused_ordering(290) 00:19:49.880 fused_ordering(291) 00:19:49.880 fused_ordering(292) 00:19:49.880 fused_ordering(293) 00:19:49.880 fused_ordering(294) 00:19:49.880 fused_ordering(295) 00:19:49.880 fused_ordering(296) 00:19:49.880 fused_ordering(297) 00:19:49.880 fused_ordering(298) 00:19:49.880 fused_ordering(299) 00:19:49.880 fused_ordering(300) 00:19:49.880 fused_ordering(301) 00:19:49.880 fused_ordering(302) 00:19:49.880 fused_ordering(303) 00:19:49.880 fused_ordering(304) 00:19:49.880 fused_ordering(305) 00:19:49.880 fused_ordering(306) 00:19:49.880 fused_ordering(307) 00:19:49.880 fused_ordering(308) 00:19:49.880 fused_ordering(309) 00:19:49.880 fused_ordering(310) 00:19:49.880 fused_ordering(311) 00:19:49.880 fused_ordering(312) 00:19:49.880 fused_ordering(313) 
00:19:49.880 fused_ordering(314) 00:19:49.880 fused_ordering(315) 00:19:49.880 fused_ordering(316) 00:19:49.880 fused_ordering(317) 00:19:49.880 fused_ordering(318) 00:19:49.880 fused_ordering(319) 00:19:49.880 fused_ordering(320) 00:19:49.880 fused_ordering(321) 00:19:49.880 fused_ordering(322) 00:19:49.880 fused_ordering(323) 00:19:49.880 fused_ordering(324) 00:19:49.880 fused_ordering(325) 00:19:49.880 fused_ordering(326) 00:19:49.880 fused_ordering(327) 00:19:49.880 fused_ordering(328) 00:19:49.880 fused_ordering(329) 00:19:49.880 fused_ordering(330) 00:19:49.880 fused_ordering(331) 00:19:49.880 fused_ordering(332) 00:19:49.880 fused_ordering(333) 00:19:49.880 fused_ordering(334) 00:19:49.880 fused_ordering(335) 00:19:49.880 fused_ordering(336) 00:19:49.880 fused_ordering(337) 00:19:49.880 fused_ordering(338) 00:19:49.880 fused_ordering(339) 00:19:49.880 fused_ordering(340) 00:19:49.880 fused_ordering(341) 00:19:49.880 fused_ordering(342) 00:19:49.880 fused_ordering(343) 00:19:49.880 fused_ordering(344) 00:19:49.880 fused_ordering(345) 00:19:49.880 fused_ordering(346) 00:19:49.880 fused_ordering(347) 00:19:49.880 fused_ordering(348) 00:19:49.880 fused_ordering(349) 00:19:49.880 fused_ordering(350) 00:19:49.880 fused_ordering(351) 00:19:49.880 fused_ordering(352) 00:19:49.880 fused_ordering(353) 00:19:49.880 fused_ordering(354) 00:19:49.880 fused_ordering(355) 00:19:49.880 fused_ordering(356) 00:19:49.880 fused_ordering(357) 00:19:49.880 fused_ordering(358) 00:19:49.880 fused_ordering(359) 00:19:49.880 fused_ordering(360) 00:19:49.880 fused_ordering(361) 00:19:49.880 fused_ordering(362) 00:19:49.880 fused_ordering(363) 00:19:49.880 fused_ordering(364) 00:19:49.880 fused_ordering(365) 00:19:49.880 fused_ordering(366) 00:19:49.880 fused_ordering(367) 00:19:49.880 fused_ordering(368) 00:19:49.880 fused_ordering(369) 00:19:49.880 fused_ordering(370) 00:19:49.880 fused_ordering(371) 00:19:49.880 fused_ordering(372) 00:19:49.880 fused_ordering(373) 00:19:49.880 fused_ordering(374) 00:19:49.880 fused_ordering(375) 00:19:49.880 fused_ordering(376) 00:19:49.880 fused_ordering(377) 00:19:49.880 fused_ordering(378) 00:19:49.880 fused_ordering(379) 00:19:49.880 fused_ordering(380) 00:19:49.880 fused_ordering(381) 00:19:49.880 fused_ordering(382) 00:19:49.880 fused_ordering(383) 00:19:49.880 fused_ordering(384) 00:19:49.880 fused_ordering(385) 00:19:49.880 fused_ordering(386) 00:19:49.880 fused_ordering(387) 00:19:49.880 fused_ordering(388) 00:19:49.880 fused_ordering(389) 00:19:49.880 fused_ordering(390) 00:19:49.880 fused_ordering(391) 00:19:49.880 fused_ordering(392) 00:19:49.880 fused_ordering(393) 00:19:49.880 fused_ordering(394) 00:19:49.880 fused_ordering(395) 00:19:49.880 fused_ordering(396) 00:19:49.880 fused_ordering(397) 00:19:49.880 fused_ordering(398) 00:19:49.880 fused_ordering(399) 00:19:49.880 fused_ordering(400) 00:19:49.880 fused_ordering(401) 00:19:49.880 fused_ordering(402) 00:19:49.880 fused_ordering(403) 00:19:49.880 fused_ordering(404) 00:19:49.880 fused_ordering(405) 00:19:49.880 fused_ordering(406) 00:19:49.880 fused_ordering(407) 00:19:49.880 fused_ordering(408) 00:19:49.880 fused_ordering(409) 00:19:49.880 fused_ordering(410) 00:19:50.138 fused_ordering(411) 00:19:50.138 fused_ordering(412) 00:19:50.138 fused_ordering(413) 00:19:50.138 fused_ordering(414) 00:19:50.138 fused_ordering(415) 00:19:50.138 fused_ordering(416) 00:19:50.138 fused_ordering(417) 00:19:50.138 fused_ordering(418) 00:19:50.138 fused_ordering(419) 00:19:50.138 fused_ordering(420) 00:19:50.138 
fused_ordering(421) 00:19:50.138 fused_ordering(422) 00:19:50.138 fused_ordering(423) 00:19:50.138 fused_ordering(424) 00:19:50.138 fused_ordering(425) 00:19:50.138 fused_ordering(426) 00:19:50.138 fused_ordering(427) 00:19:50.138 fused_ordering(428) 00:19:50.138 fused_ordering(429) 00:19:50.139 fused_ordering(430) 00:19:50.139 fused_ordering(431) 00:19:50.139 fused_ordering(432) 00:19:50.139 fused_ordering(433) 00:19:50.139 fused_ordering(434) 00:19:50.139 fused_ordering(435) 00:19:50.139 fused_ordering(436) 00:19:50.139 fused_ordering(437) 00:19:50.139 fused_ordering(438) 00:19:50.139 fused_ordering(439) 00:19:50.139 fused_ordering(440) 00:19:50.139 fused_ordering(441) 00:19:50.139 fused_ordering(442) 00:19:50.139 fused_ordering(443) 00:19:50.139 fused_ordering(444) 00:19:50.139 fused_ordering(445) 00:19:50.139 fused_ordering(446) 00:19:50.139 fused_ordering(447) 00:19:50.139 fused_ordering(448) 00:19:50.139 fused_ordering(449) 00:19:50.139 fused_ordering(450) 00:19:50.139 fused_ordering(451) 00:19:50.139 fused_ordering(452) 00:19:50.139 fused_ordering(453) 00:19:50.139 fused_ordering(454) 00:19:50.139 fused_ordering(455) 00:19:50.139 fused_ordering(456) 00:19:50.139 fused_ordering(457) 00:19:50.139 fused_ordering(458) 00:19:50.139 fused_ordering(459) 00:19:50.139 fused_ordering(460) 00:19:50.139 fused_ordering(461) 00:19:50.139 fused_ordering(462) 00:19:50.139 fused_ordering(463) 00:19:50.139 fused_ordering(464) 00:19:50.139 fused_ordering(465) 00:19:50.139 fused_ordering(466) 00:19:50.139 fused_ordering(467) 00:19:50.139 fused_ordering(468) 00:19:50.139 fused_ordering(469) 00:19:50.139 fused_ordering(470) 00:19:50.139 fused_ordering(471) 00:19:50.139 fused_ordering(472) 00:19:50.139 fused_ordering(473) 00:19:50.139 fused_ordering(474) 00:19:50.139 fused_ordering(475) 00:19:50.139 fused_ordering(476) 00:19:50.139 fused_ordering(477) 00:19:50.139 fused_ordering(478) 00:19:50.139 fused_ordering(479) 00:19:50.139 fused_ordering(480) 00:19:50.139 fused_ordering(481) 00:19:50.139 fused_ordering(482) 00:19:50.139 fused_ordering(483) 00:19:50.139 fused_ordering(484) 00:19:50.139 fused_ordering(485) 00:19:50.139 fused_ordering(486) 00:19:50.139 fused_ordering(487) 00:19:50.139 fused_ordering(488) 00:19:50.139 fused_ordering(489) 00:19:50.139 fused_ordering(490) 00:19:50.139 fused_ordering(491) 00:19:50.139 fused_ordering(492) 00:19:50.139 fused_ordering(493) 00:19:50.139 fused_ordering(494) 00:19:50.139 fused_ordering(495) 00:19:50.139 fused_ordering(496) 00:19:50.139 fused_ordering(497) 00:19:50.139 fused_ordering(498) 00:19:50.139 fused_ordering(499) 00:19:50.139 fused_ordering(500) 00:19:50.139 fused_ordering(501) 00:19:50.139 fused_ordering(502) 00:19:50.139 fused_ordering(503) 00:19:50.139 fused_ordering(504) 00:19:50.139 fused_ordering(505) 00:19:50.139 fused_ordering(506) 00:19:50.139 fused_ordering(507) 00:19:50.139 fused_ordering(508) 00:19:50.139 fused_ordering(509) 00:19:50.139 fused_ordering(510) 00:19:50.139 fused_ordering(511) 00:19:50.139 fused_ordering(512) 00:19:50.139 fused_ordering(513) 00:19:50.139 fused_ordering(514) 00:19:50.139 fused_ordering(515) 00:19:50.139 fused_ordering(516) 00:19:50.139 fused_ordering(517) 00:19:50.139 fused_ordering(518) 00:19:50.139 fused_ordering(519) 00:19:50.139 fused_ordering(520) 00:19:50.139 fused_ordering(521) 00:19:50.139 fused_ordering(522) 00:19:50.139 fused_ordering(523) 00:19:50.139 fused_ordering(524) 00:19:50.139 fused_ordering(525) 00:19:50.139 fused_ordering(526) 00:19:50.139 fused_ordering(527) 00:19:50.139 fused_ordering(528) 
00:19:50.139 fused_ordering(529) ... 00:19:51.016 fused_ordering(958) [fused_ordering entries 530 through 957 are consecutive and elided; they were logged between 00:19:50.139 and 00:19:51.016]
00:19:51.016 fused_ordering(959) 00:19:51.016 fused_ordering(960) 00:19:51.016 fused_ordering(961) 00:19:51.016 fused_ordering(962) 00:19:51.016 fused_ordering(963) 00:19:51.016 fused_ordering(964) 00:19:51.016 fused_ordering(965) 00:19:51.016 fused_ordering(966) 00:19:51.016 fused_ordering(967) 00:19:51.016 fused_ordering(968) 00:19:51.016 fused_ordering(969) 00:19:51.016 fused_ordering(970) 00:19:51.016 fused_ordering(971) 00:19:51.016 fused_ordering(972) 00:19:51.016 fused_ordering(973) 00:19:51.016 fused_ordering(974) 00:19:51.016 fused_ordering(975) 00:19:51.016 fused_ordering(976) 00:19:51.017 fused_ordering(977) 00:19:51.017 fused_ordering(978) 00:19:51.017 fused_ordering(979) 00:19:51.017 fused_ordering(980) 00:19:51.017 fused_ordering(981) 00:19:51.017 fused_ordering(982) 00:19:51.017 fused_ordering(983) 00:19:51.017 fused_ordering(984) 00:19:51.017 fused_ordering(985) 00:19:51.017 fused_ordering(986) 00:19:51.017 fused_ordering(987) 00:19:51.017 fused_ordering(988) 00:19:51.017 fused_ordering(989) 00:19:51.017 fused_ordering(990) 00:19:51.017 fused_ordering(991) 00:19:51.017 fused_ordering(992) 00:19:51.017 fused_ordering(993) 00:19:51.017 fused_ordering(994) 00:19:51.017 fused_ordering(995) 00:19:51.017 fused_ordering(996) 00:19:51.017 fused_ordering(997) 00:19:51.017 fused_ordering(998) 00:19:51.017 fused_ordering(999) 00:19:51.017 fused_ordering(1000) 00:19:51.017 fused_ordering(1001) 00:19:51.017 fused_ordering(1002) 00:19:51.017 fused_ordering(1003) 00:19:51.017 fused_ordering(1004) 00:19:51.017 fused_ordering(1005) 00:19:51.017 fused_ordering(1006) 00:19:51.017 fused_ordering(1007) 00:19:51.017 fused_ordering(1008) 00:19:51.017 fused_ordering(1009) 00:19:51.017 fused_ordering(1010) 00:19:51.017 fused_ordering(1011) 00:19:51.017 fused_ordering(1012) 00:19:51.017 fused_ordering(1013) 00:19:51.017 fused_ordering(1014) 00:19:51.017 fused_ordering(1015) 00:19:51.017 fused_ordering(1016) 00:19:51.017 fused_ordering(1017) 00:19:51.017 fused_ordering(1018) 00:19:51.017 fused_ordering(1019) 00:19:51.017 fused_ordering(1020) 00:19:51.017 fused_ordering(1021) 00:19:51.017 fused_ordering(1022) 00:19:51.017 fused_ordering(1023) 00:19:51.017 13:30:08 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:51.017 13:30:08 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:51.017 13:30:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:51.017 13:30:08 -- nvmf/common.sh@117 -- # sync 00:19:51.275 13:30:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:51.275 13:30:08 -- nvmf/common.sh@120 -- # set +e 00:19:51.275 13:30:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:51.275 13:30:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:51.275 rmmod nvme_tcp 00:19:51.275 rmmod nvme_fabrics 00:19:51.275 rmmod nvme_keyring 00:19:51.275 13:30:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.275 13:30:08 -- nvmf/common.sh@124 -- # set -e 00:19:51.275 13:30:08 -- nvmf/common.sh@125 -- # return 0 00:19:51.275 13:30:08 -- nvmf/common.sh@478 -- # '[' -n 70317 ']' 00:19:51.275 13:30:08 -- nvmf/common.sh@479 -- # killprocess 70317 00:19:51.275 13:30:08 -- common/autotest_common.sh@936 -- # '[' -z 70317 ']' 00:19:51.275 13:30:08 -- common/autotest_common.sh@940 -- # kill -0 70317 00:19:51.275 13:30:08 -- common/autotest_common.sh@941 -- # uname 00:19:51.275 13:30:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.275 13:30:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70317 00:19:51.275 13:30:08 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:51.275 killing process with pid 70317 00:19:51.275 13:30:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:51.275 13:30:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70317' 00:19:51.275 13:30:08 -- common/autotest_common.sh@955 -- # kill 70317 00:19:51.275 13:30:08 -- common/autotest_common.sh@960 -- # wait 70317 00:19:51.534 13:30:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:51.534 13:30:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:51.534 13:30:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:51.534 13:30:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:51.534 13:30:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:51.534 13:30:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.534 13:30:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.534 13:30:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.534 13:30:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:51.534 00:19:51.534 real 0m4.226s 00:19:51.534 user 0m5.076s 00:19:51.534 sys 0m1.395s 00:19:51.534 13:30:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:51.534 13:30:08 -- common/autotest_common.sh@10 -- # set +x 00:19:51.534 ************************************ 00:19:51.534 END TEST nvmf_fused_ordering 00:19:51.534 ************************************ 00:19:51.534 13:30:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:51.534 13:30:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:51.534 13:30:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:51.534 13:30:08 -- common/autotest_common.sh@10 -- # set +x 00:19:51.840 ************************************ 00:19:51.840 START TEST nvmf_delete_subsystem 00:19:51.840 ************************************ 00:19:51.840 13:30:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:51.840 * Looking for test storage... 
00:19:51.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:51.840 13:30:09 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:51.840 13:30:09 -- nvmf/common.sh@7 -- # uname -s 00:19:51.840 13:30:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:51.840 13:30:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:51.840 13:30:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:51.840 13:30:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:51.840 13:30:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:51.840 13:30:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:51.840 13:30:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:51.840 13:30:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:51.840 13:30:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:51.840 13:30:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:51.840 13:30:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:51.840 13:30:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:19:51.840 13:30:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:51.840 13:30:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:51.840 13:30:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:51.840 13:30:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:51.841 13:30:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:51.841 13:30:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:51.841 13:30:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:51.841 13:30:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:51.841 13:30:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.841 13:30:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.841 13:30:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.841 13:30:09 -- paths/export.sh@5 -- # export PATH 00:19:51.841 13:30:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:51.841 13:30:09 -- nvmf/common.sh@47 -- # : 0 00:19:51.841 13:30:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:51.841 13:30:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:51.841 13:30:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:51.841 13:30:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:51.841 13:30:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:51.841 13:30:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:51.841 13:30:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:51.841 13:30:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:51.841 13:30:09 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:19:51.841 13:30:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:51.841 13:30:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:51.841 13:30:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:51.841 13:30:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:51.841 13:30:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:51.841 13:30:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:51.841 13:30:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:51.841 13:30:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:51.841 13:30:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:51.841 13:30:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:51.841 13:30:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:51.841 13:30:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:51.841 13:30:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:51.841 13:30:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:51.841 13:30:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.841 13:30:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.841 13:30:09 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:51.841 13:30:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:51.841 13:30:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:51.841 13:30:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:51.841 13:30:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:51.841 13:30:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:51.841 13:30:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:51.841 13:30:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:51.841 13:30:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:51.841 13:30:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:51.841 13:30:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:51.841 13:30:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:51.841 Cannot find device "nvmf_tgt_br" 00:19:51.841 13:30:09 -- nvmf/common.sh@155 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:51.841 Cannot find device "nvmf_tgt_br2" 00:19:51.841 13:30:09 -- nvmf/common.sh@156 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:51.841 13:30:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:51.841 Cannot find device "nvmf_tgt_br" 00:19:51.841 13:30:09 -- nvmf/common.sh@158 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:51.841 Cannot find device "nvmf_tgt_br2" 00:19:51.841 13:30:09 -- nvmf/common.sh@159 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:51.841 13:30:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:51.841 13:30:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:51.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.841 13:30:09 -- nvmf/common.sh@162 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:51.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:51.841 13:30:09 -- nvmf/common.sh@163 -- # true 00:19:51.841 13:30:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:51.841 13:30:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:51.841 13:30:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:51.841 13:30:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:51.841 13:30:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:51.841 13:30:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:51.841 13:30:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:51.841 13:30:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:52.099 13:30:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:52.099 13:30:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:52.099 13:30:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:52.099 13:30:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:52.099 13:30:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:52.099 13:30:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:52.099 13:30:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:52.099 13:30:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:52.099 13:30:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:52.099 13:30:09 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:52.099 13:30:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:52.099 13:30:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:52.099 13:30:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:52.099 13:30:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:52.099 13:30:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:52.099 13:30:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:52.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:52.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:52.099 00:19:52.099 --- 10.0.0.2 ping statistics --- 00:19:52.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.099 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:52.099 13:30:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:52.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:52.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:19:52.099 00:19:52.099 --- 10.0.0.3 ping statistics --- 00:19:52.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.099 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:52.099 13:30:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:52.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:52.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:52.099 00:19:52.099 --- 10.0.0.1 ping statistics --- 00:19:52.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:52.099 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:52.099 13:30:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:52.099 13:30:09 -- nvmf/common.sh@422 -- # return 0 00:19:52.099 13:30:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.099 13:30:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:52.099 13:30:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:52.099 13:30:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:52.099 13:30:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:52.099 13:30:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:52.099 13:30:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:52.099 13:30:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:19:52.099 13:30:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.099 13:30:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.099 13:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:52.099 13:30:09 -- nvmf/common.sh@470 -- # nvmfpid=70580 00:19:52.099 13:30:09 -- nvmf/common.sh@471 -- # waitforlisten 70580 00:19:52.099 13:30:09 -- common/autotest_common.sh@817 -- # '[' -z 70580 ']' 00:19:52.099 13:30:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.099 13:30:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.099 13:30:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:52.099 13:30:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
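The nvmf_veth_init sequence traced above boils down to a small shell recipe. The sketch below restates it in condensed form; every command, interface name and address is taken from the trace itself and only the comments are added, so it should be read as a summary of what this run did rather than a general-purpose setup:
# Condensed restatement of the nvmf_veth_init steps traced above.
ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target-side veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target-side veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                  # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                   # bridge joins the host-side peer interfaces
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP traffic on the listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # sanity-check both target addresses from the host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # and the initiator address from inside the namespace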
00:19:52.099 13:30:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.099 13:30:09 -- common/autotest_common.sh@10 -- # set +x 00:19:52.099 [2024-04-26 13:30:09.499993] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:19:52.099 [2024-04-26 13:30:09.500108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.357 [2024-04-26 13:30:09.639122] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:52.616 [2024-04-26 13:30:09.817674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.616 [2024-04-26 13:30:09.817757] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.616 [2024-04-26 13:30:09.817769] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.616 [2024-04-26 13:30:09.817777] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.616 [2024-04-26 13:30:09.817785] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.616 [2024-04-26 13:30:09.817940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.616 [2024-04-26 13:30:09.818348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.182 13:30:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.182 13:30:10 -- common/autotest_common.sh@850 -- # return 0 00:19:53.182 13:30:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.182 13:30:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.182 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 13:30:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 [2024-04-26 13:30:10.645294] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 [2024-04-26 13:30:10.661657] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 
NULL1 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 Delay0 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:53.440 13:30:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.440 13:30:10 -- common/autotest_common.sh@10 -- # set +x 00:19:53.440 13:30:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@28 -- # perf_pid=70631 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@30 -- # sleep 2 00:19:53.440 13:30:10 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:53.440 [2024-04-26 13:30:10.868134] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:55.343 13:30:12 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.343 13:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.343 13:30:12 -- common/autotest_common.sh@10 -- # set +x 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting 
I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 [2024-04-26 13:30:12.905134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194ace0 is same with the state(5) to be set 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read 
completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Write completed with error (sct=0, sc=8) 00:19:55.603 starting I/O failed: -6 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.603 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 starting I/O failed: -6 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 starting I/O failed: -6 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 [2024-04-26 13:30:12.906833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800c3d0 is same with the state(5) to be set 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error 
(sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:55.604 Read completed with error (sct=0, sc=8) 00:19:55.604 Write completed with error (sct=0, sc=8) 00:19:56.540 [2024-04-26 13:30:13.882506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a100 is same with the state(5) to be set 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 
Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 [2024-04-26 13:30:13.907172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194c200 is same with the state(5) to be set 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 [2024-04-26 13:30:13.907406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194afa0 is same with the state(5) to be set 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 [2024-04-26 13:30:13.908005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800c690 is same with the state(5) to be set 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read 
completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Read completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 Write completed with error (sct=0, sc=8) 00:19:56.540 [2024-04-26 13:30:13.908452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f358800bf90 is same with the state(5) to be set 00:19:56.540 [2024-04-26 13:30:13.909676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a100 (9): Bad file descriptor 00:19:56.540 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:19:56.540 13:30:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:56.540 13:30:13 -- target/delete_subsystem.sh@34 -- # delay=0 00:19:56.540 13:30:13 -- target/delete_subsystem.sh@35 -- # kill -0 70631 00:19:56.540 13:30:13 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:19:56.540 Initializing NVMe Controllers 00:19:56.540 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:56.540 Controller IO queue size 128, less than required. 00:19:56.540 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:56.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:56.540 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:56.540 Initialization complete. Launching workers. 
00:19:56.540 ======================================================== 00:19:56.540 Latency(us) 00:19:56.541 Device Information : IOPS MiB/s Average min max 00:19:56.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.22 0.08 895069.79 459.86 1012224.74 00:19:56.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.29 0.08 916858.99 309.38 1015778.10 00:19:56.541 ======================================================== 00:19:56.541 Total : 330.51 0.16 905637.22 309.38 1015778.10 00:19:56.541 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@35 -- # kill -0 70631 00:19:57.108 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70631) - No such process 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@45 -- # NOT wait 70631 00:19:57.108 13:30:14 -- common/autotest_common.sh@638 -- # local es=0 00:19:57.108 13:30:14 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 70631 00:19:57.108 13:30:14 -- common/autotest_common.sh@626 -- # local arg=wait 00:19:57.108 13:30:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:57.108 13:30:14 -- common/autotest_common.sh@630 -- # type -t wait 00:19:57.108 13:30:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:57.108 13:30:14 -- common/autotest_common.sh@641 -- # wait 70631 00:19:57.108 13:30:14 -- common/autotest_common.sh@641 -- # es=1 00:19:57.108 13:30:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:57.108 13:30:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:57.108 13:30:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:57.108 13:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.108 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:19:57.108 13:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:57.108 13:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.108 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:19:57.108 [2024-04-26 13:30:14.433253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.108 13:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:57.108 13:30:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:57.108 13:30:14 -- common/autotest_common.sh@10 -- # set +x 00:19:57.108 13:30:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@54 -- # perf_pid=70677 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:57.108 13:30:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:57.366 [2024-04-26 13:30:14.612341] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:57.624 13:30:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:57.624 13:30:14 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:57.624 13:30:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:58.192 13:30:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:58.192 13:30:15 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:58.192 13:30:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:58.759 13:30:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:58.759 13:30:15 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:58.759 13:30:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:59.018 13:30:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:59.018 13:30:16 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:59.018 13:30:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:59.585 13:30:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:59.585 13:30:16 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:19:59.585 13:30:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:00.153 13:30:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:00.153 13:30:17 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:20:00.153 13:30:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:20:00.435 Initializing NVMe Controllers 00:20:00.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:00.435 Controller IO queue size 128, less than required. 00:20:00.435 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:00.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:20:00.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:20:00.435 Initialization complete. Launching workers. 
00:20:00.435 ======================================================== 00:20:00.435 Latency(us) 00:20:00.435 Device Information : IOPS MiB/s Average min max 00:20:00.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004514.67 1000134.14 1043394.27 00:20:00.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004948.02 1000192.52 1013986.68 00:20:00.435 ======================================================== 00:20:00.435 Total : 256.00 0.12 1004731.34 1000134.14 1043394.27 00:20:00.435 00:20:00.693 13:30:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:20:00.693 13:30:17 -- target/delete_subsystem.sh@57 -- # kill -0 70677 00:20:00.693 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70677) - No such process 00:20:00.693 13:30:17 -- target/delete_subsystem.sh@67 -- # wait 70677 00:20:00.693 13:30:17 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:00.693 13:30:17 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:20:00.693 13:30:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:00.693 13:30:17 -- nvmf/common.sh@117 -- # sync 00:20:00.693 13:30:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.693 13:30:18 -- nvmf/common.sh@120 -- # set +e 00:20:00.693 13:30:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.693 13:30:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.693 rmmod nvme_tcp 00:20:00.693 rmmod nvme_fabrics 00:20:00.693 rmmod nvme_keyring 00:20:00.693 13:30:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.693 13:30:18 -- nvmf/common.sh@124 -- # set -e 00:20:00.693 13:30:18 -- nvmf/common.sh@125 -- # return 0 00:20:00.693 13:30:18 -- nvmf/common.sh@478 -- # '[' -n 70580 ']' 00:20:00.693 13:30:18 -- nvmf/common.sh@479 -- # killprocess 70580 00:20:00.693 13:30:18 -- common/autotest_common.sh@936 -- # '[' -z 70580 ']' 00:20:00.693 13:30:18 -- common/autotest_common.sh@940 -- # kill -0 70580 00:20:00.693 13:30:18 -- common/autotest_common.sh@941 -- # uname 00:20:00.693 13:30:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:00.693 13:30:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70580 00:20:00.693 13:30:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:00.693 13:30:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:00.693 13:30:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70580' 00:20:00.693 killing process with pid 70580 00:20:00.693 13:30:18 -- common/autotest_common.sh@955 -- # kill 70580 00:20:00.693 13:30:18 -- common/autotest_common.sh@960 -- # wait 70580 00:20:00.952 13:30:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:00.952 13:30:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:00.952 13:30:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:00.952 13:30:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.952 13:30:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.952 13:30:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.952 13:30:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.952 13:30:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.211 13:30:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:01.211 00:20:01.211 real 0m9.413s 00:20:01.211 user 0m28.846s 00:20:01.211 sys 0m1.634s 00:20:01.211 13:30:18 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:20:01.211 ************************************ 00:20:01.211 END TEST nvmf_delete_subsystem 00:20:01.211 ************************************ 00:20:01.211 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:20:01.211 13:30:18 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:20:01.211 13:30:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:01.211 13:30:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:01.211 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:20:01.211 ************************************ 00:20:01.211 START TEST nvmf_ns_masking 00:20:01.211 ************************************ 00:20:01.211 13:30:18 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:20:01.211 * Looking for test storage... 00:20:01.211 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:01.211 13:30:18 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.211 13:30:18 -- nvmf/common.sh@7 -- # uname -s 00:20:01.211 13:30:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.211 13:30:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.211 13:30:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.212 13:30:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.212 13:30:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.212 13:30:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.212 13:30:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.212 13:30:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.212 13:30:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.212 13:30:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.212 13:30:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:01.212 13:30:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:01.212 13:30:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.212 13:30:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.212 13:30:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.212 13:30:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.212 13:30:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.212 13:30:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.212 13:30:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.212 13:30:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.212 13:30:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.212 13:30:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.212 13:30:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.212 13:30:18 -- paths/export.sh@5 -- # export PATH 00:20:01.212 13:30:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.212 13:30:18 -- nvmf/common.sh@47 -- # : 0 00:20:01.212 13:30:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.212 13:30:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.212 13:30:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.212 13:30:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.212 13:30:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.212 13:30:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.212 13:30:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.212 13:30:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.212 13:30:18 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:01.212 13:30:18 -- target/ns_masking.sh@11 -- # loops=5 00:20:01.212 13:30:18 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:20:01.212 13:30:18 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:20:01.212 13:30:18 -- target/ns_masking.sh@15 -- # uuidgen 00:20:01.212 13:30:18 -- target/ns_masking.sh@15 -- # HOSTID=65e748f9-130b-4f88-b43a-98019cf2f451 00:20:01.212 13:30:18 -- target/ns_masking.sh@44 -- # nvmftestinit 00:20:01.212 13:30:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:01.212 13:30:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.212 13:30:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:01.212 13:30:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:01.212 13:30:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:01.212 13:30:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.212 13:30:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.212 13:30:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:20:01.212 13:30:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:01.212 13:30:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:01.212 13:30:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:01.212 13:30:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:01.212 13:30:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:01.212 13:30:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:01.212 13:30:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.212 13:30:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.212 13:30:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:01.212 13:30:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:01.212 13:30:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.212 13:30:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.212 13:30:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.212 13:30:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.212 13:30:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.212 13:30:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.212 13:30:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.212 13:30:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.212 13:30:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:01.471 13:30:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:01.471 Cannot find device "nvmf_tgt_br" 00:20:01.471 13:30:18 -- nvmf/common.sh@155 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.471 Cannot find device "nvmf_tgt_br2" 00:20:01.471 13:30:18 -- nvmf/common.sh@156 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:01.471 13:30:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:01.471 Cannot find device "nvmf_tgt_br" 00:20:01.471 13:30:18 -- nvmf/common.sh@158 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:01.471 Cannot find device "nvmf_tgt_br2" 00:20:01.471 13:30:18 -- nvmf/common.sh@159 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:01.471 13:30:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:01.471 13:30:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.471 13:30:18 -- nvmf/common.sh@162 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.471 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.471 13:30:18 -- nvmf/common.sh@163 -- # true 00:20:01.471 13:30:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.471 13:30:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.471 13:30:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.471 13:30:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.471 13:30:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.471 13:30:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
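The stretch of nvmf/common.sh above tears down any leftover test interfaces and then creates the network namespace and veth pairs; the lines that follow assign addresses, bring the links up, and wire everything through a bridge. A condensed sketch of the topology nvmf_veth_init is building here, assuming the same interface names and 10.0.0.0/24 addressing seen in this run (the second target interface, nvmf_tgt_if2 / 10.0.0.3, follows the same pattern and is omitted; error handling is left out):

    # initiator end stays in the root namespace, target end lives in nvmf_tgt_ns_spdk
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator address, 10.0.0.2 = first target address
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bridge the peer ends so the root namespace and the target namespace can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # accept NVMe/TCP on the initiator interface and let bridged frames through
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check, as done in the log below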
00:20:01.471 13:30:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.471 13:30:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:01.471 13:30:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:01.471 13:30:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:01.471 13:30:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:01.471 13:30:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:01.471 13:30:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:01.471 13:30:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:01.471 13:30:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:01.471 13:30:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:01.471 13:30:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:01.471 13:30:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:01.471 13:30:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:01.729 13:30:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:01.729 13:30:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:01.729 13:30:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:01.729 13:30:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.729 13:30:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:01.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:20:01.729 00:20:01.729 --- 10.0.0.2 ping statistics --- 00:20:01.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.729 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:01.729 13:30:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:01.729 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.729 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:20:01.729 00:20:01.729 --- 10.0.0.3 ping statistics --- 00:20:01.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.729 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:01.729 13:30:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:01.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:20:01.729 00:20:01.729 --- 10.0.0.1 ping statistics --- 00:20:01.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.729 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:01.729 13:30:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.729 13:30:18 -- nvmf/common.sh@422 -- # return 0 00:20:01.729 13:30:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:01.729 13:30:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.729 13:30:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:01.729 13:30:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:01.729 13:30:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.729 13:30:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:01.729 13:30:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:01.729 13:30:18 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:20:01.729 13:30:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:01.729 13:30:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:01.729 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:20:01.729 13:30:19 -- nvmf/common.sh@470 -- # nvmfpid=70912 00:20:01.729 13:30:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:01.729 13:30:19 -- nvmf/common.sh@471 -- # waitforlisten 70912 00:20:01.729 13:30:19 -- common/autotest_common.sh@817 -- # '[' -z 70912 ']' 00:20:01.729 13:30:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.729 13:30:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:01.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.729 13:30:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.729 13:30:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:01.729 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:20:01.729 [2024-04-26 13:30:19.060351] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:01.729 [2024-04-26 13:30:19.060465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.987 [2024-04-26 13:30:19.199637] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.987 [2024-04-26 13:30:19.323770] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.987 [2024-04-26 13:30:19.324087] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.987 [2024-04-26 13:30:19.324191] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.987 [2024-04-26 13:30:19.324273] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.987 [2024-04-26 13:30:19.324350] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
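With connectivity verified, nvmfappstart launches nvmf_tgt inside the target namespace and blocks until its JSON-RPC socket answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe (the real waitforlisten helper in autotest_common.sh also handles timeouts and custom socket paths):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the RPC socket until the target is ready to accept commands
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done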
00:20:01.987 [2024-04-26 13:30:19.324577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.987 [2024-04-26 13:30:19.324664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.988 [2024-04-26 13:30:19.324937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.988 [2024-04-26 13:30:19.324941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.922 13:30:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:02.922 13:30:20 -- common/autotest_common.sh@850 -- # return 0 00:20:02.922 13:30:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:02.922 13:30:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:02.922 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:20:02.922 13:30:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.922 13:30:20 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:02.922 [2024-04-26 13:30:20.351041] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.180 13:30:20 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:20:03.180 13:30:20 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:20:03.180 13:30:20 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:03.438 Malloc1 00:20:03.438 13:30:20 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:03.696 Malloc2 00:20:03.696 13:30:21 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:03.955 13:30:21 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:04.213 13:30:21 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.472 [2024-04-26 13:30:21.800937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.472 13:30:21 -- target/ns_masking.sh@61 -- # connect 00:20:04.472 13:30:21 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65e748f9-130b-4f88-b43a-98019cf2f451 -a 10.0.0.2 -s 4420 -i 4 00:20:04.730 13:30:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:20:04.730 13:30:21 -- common/autotest_common.sh@1184 -- # local i=0 00:20:04.730 13:30:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:04.730 13:30:21 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:04.730 13:30:21 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:06.631 13:30:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:06.631 13:30:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:06.631 13:30:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:06.631 13:30:23 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:06.631 13:30:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:06.631 13:30:23 -- common/autotest_common.sh@1194 -- # return 0 00:20:06.631 13:30:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:06.631 13:30:23 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:06.631 13:30:24 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:06.631 13:30:24 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:06.631 13:30:24 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:20:06.631 13:30:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:06.631 13:30:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:06.631 [ 0]:0x1 00:20:06.631 13:30:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:06.631 13:30:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:06.631 13:30:24 -- target/ns_masking.sh@40 -- # nguid=5d42e0f9b5bc4e5984dfdf11546232e6 00:20:06.631 13:30:24 -- target/ns_masking.sh@41 -- # [[ 5d42e0f9b5bc4e5984dfdf11546232e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:06.631 13:30:24 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:07.197 13:30:24 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:20:07.197 13:30:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:07.197 13:30:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:07.197 [ 0]:0x1 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # nguid=5d42e0f9b5bc4e5984dfdf11546232e6 00:20:07.197 13:30:24 -- target/ns_masking.sh@41 -- # [[ 5d42e0f9b5bc4e5984dfdf11546232e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:07.197 13:30:24 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:20:07.197 13:30:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:07.197 13:30:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:07.197 [ 1]:0x2 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:07.197 13:30:24 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:07.197 13:30:24 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:07.197 13:30:24 -- target/ns_masking.sh@69 -- # disconnect 00:20:07.197 13:30:24 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:07.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:07.197 13:30:24 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:07.764 13:30:24 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:08.022 13:30:25 -- target/ns_masking.sh@77 -- # connect 1 00:20:08.022 13:30:25 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65e748f9-130b-4f88-b43a-98019cf2f451 -a 10.0.0.2 -s 4420 -i 4 00:20:08.022 13:30:25 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:08.022 13:30:25 -- common/autotest_common.sh@1184 -- # local i=0 00:20:08.022 13:30:25 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:08.022 13:30:25 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:20:08.022 13:30:25 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:20:08.022 13:30:25 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:09.924 13:30:27 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:09.924 13:30:27 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:09.924 13:30:27 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.924 13:30:27 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:09.924 13:30:27 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.924 13:30:27 -- common/autotest_common.sh@1194 -- # return 0 00:20:09.924 13:30:27 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:09.924 13:30:27 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:10.183 13:30:27 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:10.183 13:30:27 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:10.183 13:30:27 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:20:10.183 13:30:27 -- common/autotest_common.sh@638 -- # local es=0 00:20:10.183 13:30:27 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:10.183 13:30:27 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:10.183 13:30:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.183 13:30:27 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:10.183 13:30:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.183 13:30:27 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:10.183 13:30:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.183 13:30:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:10.183 13:30:27 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.183 13:30:27 -- common/autotest_common.sh@641 -- # es=1 00:20:10.183 13:30:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:10.183 13:30:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:10.183 13:30:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:10.183 13:30:27 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:20:10.183 13:30:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.183 13:30:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:10.183 [ 0]:0x2 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.183 13:30:27 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:10.183 13:30:27 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.183 13:30:27 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:10.441 13:30:27 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:20:10.441 13:30:27 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:10.441 13:30:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.441 [ 0]:0x1 00:20:10.441 
13:30:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:10.441 13:30:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.699 13:30:27 -- target/ns_masking.sh@40 -- # nguid=5d42e0f9b5bc4e5984dfdf11546232e6 00:20:10.699 13:30:27 -- target/ns_masking.sh@41 -- # [[ 5d42e0f9b5bc4e5984dfdf11546232e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.699 13:30:27 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:20:10.699 13:30:27 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.699 13:30:27 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:10.699 [ 1]:0x2 00:20:10.699 13:30:27 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.699 13:30:27 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:10.699 13:30:27 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:10.699 13:30:27 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.699 13:30:27 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:10.957 13:30:28 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:20:10.957 13:30:28 -- common/autotest_common.sh@638 -- # local es=0 00:20:10.957 13:30:28 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:10.957 13:30:28 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:10.957 13:30:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.957 13:30:28 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:10.957 13:30:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:10.957 13:30:28 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:10.957 13:30:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.957 13:30:28 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:10.957 13:30:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:10.957 13:30:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.957 13:30:28 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:10.957 13:30:28 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.957 13:30:28 -- common/autotest_common.sh@641 -- # es=1 00:20:10.957 13:30:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:10.957 13:30:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:10.957 13:30:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:10.957 13:30:28 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:20:10.957 13:30:28 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:10.958 13:30:28 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:10.958 [ 0]:0x2 00:20:10.958 13:30:28 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:10.958 13:30:28 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:10.958 13:30:28 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:10.958 13:30:28 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:10.958 13:30:28 -- target/ns_masking.sh@91 -- # disconnect 00:20:10.958 13:30:28 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:10.958 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.216 13:30:28 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:11.475 13:30:28 -- target/ns_masking.sh@95 -- # connect 2 00:20:11.475 13:30:28 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 65e748f9-130b-4f88-b43a-98019cf2f451 -a 10.0.0.2 -s 4420 -i 4 00:20:11.475 13:30:28 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:11.475 13:30:28 -- common/autotest_common.sh@1184 -- # local i=0 00:20:11.475 13:30:28 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:11.475 13:30:28 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:20:11.475 13:30:28 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:20:11.475 13:30:28 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:13.377 13:30:30 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:13.377 13:30:30 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:13.377 13:30:30 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:13.636 13:30:30 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:20:13.636 13:30:30 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:13.636 13:30:30 -- common/autotest_common.sh@1194 -- # return 0 00:20:13.636 13:30:30 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:13.636 13:30:30 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:13.636 13:30:30 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:13.636 13:30:30 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:13.636 13:30:30 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:20:13.636 13:30:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:13.636 13:30:30 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:13.636 [ 0]:0x1 00:20:13.636 13:30:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:13.636 13:30:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:13.636 13:30:30 -- target/ns_masking.sh@40 -- # nguid=5d42e0f9b5bc4e5984dfdf11546232e6 00:20:13.636 13:30:30 -- target/ns_masking.sh@41 -- # [[ 5d42e0f9b5bc4e5984dfdf11546232e6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:13.636 13:30:30 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:20:13.636 13:30:30 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:13.636 13:30:30 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:13.636 [ 1]:0x2 00:20:13.636 13:30:30 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:13.636 13:30:30 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:13.636 13:30:31 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:13.636 13:30:31 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:13.636 13:30:31 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:13.902 13:30:31 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:20:13.902 13:30:31 -- common/autotest_common.sh@638 -- # local es=0 00:20:13.902 13:30:31 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:20:13.902 13:30:31 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:13.902 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:13.902 13:30:31 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:13.902 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:13.902 13:30:31 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:13.902 13:30:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:13.902 13:30:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:13.902 13:30:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:13.902 13:30:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:14.178 13:30:31 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:14.178 13:30:31 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:14.178 13:30:31 -- common/autotest_common.sh@641 -- # es=1 00:20:14.178 13:30:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:14.178 13:30:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:14.178 13:30:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:14.178 13:30:31 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:20:14.178 13:30:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:14.178 13:30:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:14.178 [ 0]:0x2 00:20:14.178 13:30:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:14.178 13:30:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:14.178 13:30:31 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:14.178 13:30:31 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:14.178 13:30:31 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:14.178 13:30:31 -- common/autotest_common.sh@638 -- # local es=0 00:20:14.178 13:30:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:14.178 13:30:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.178 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:14.178 13:30:31 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.178 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:14.178 13:30:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.178 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:14.178 13:30:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:14.178 13:30:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:14.178 13:30:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:14.436 [2024-04-26 13:30:31.685882] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:14.436 2024/04/26 13:30:31 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:20:14.436 request: 00:20:14.436 { 00:20:14.436 "method": "nvmf_ns_remove_host", 00:20:14.436 "params": { 00:20:14.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.436 "nsid": 2, 00:20:14.436 "host": "nqn.2016-06.io.spdk:host1" 00:20:14.436 } 00:20:14.436 } 00:20:14.436 Got JSON-RPC error response 00:20:14.436 GoRPCClient: error on JSON-RPC call 00:20:14.436 13:30:31 -- common/autotest_common.sh@641 -- # es=1 00:20:14.436 13:30:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:14.436 13:30:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:14.436 13:30:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:14.436 13:30:31 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:20:14.436 13:30:31 -- common/autotest_common.sh@638 -- # local es=0 00:20:14.436 13:30:31 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:20:14.436 13:30:31 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:20:14.436 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:14.436 13:30:31 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:20:14.436 13:30:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:14.436 13:30:31 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:20:14.436 13:30:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:14.436 13:30:31 -- target/ns_masking.sh@39 -- # grep 0x1 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:14.436 13:30:31 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:14.436 13:30:31 -- common/autotest_common.sh@641 -- # es=1 00:20:14.436 13:30:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:14.436 13:30:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:14.436 13:30:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:14.436 13:30:31 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:20:14.436 13:30:31 -- target/ns_masking.sh@39 -- # grep 0x2 00:20:14.436 13:30:31 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:14.436 [ 0]:0x2 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:14.436 13:30:31 -- target/ns_masking.sh@40 -- # nguid=368803068f814e4dbe9d6481e9011a6c 00:20:14.436 13:30:31 -- target/ns_masking.sh@41 -- # [[ 368803068f814e4dbe9d6481e9011a6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:14.436 13:30:31 -- target/ns_masking.sh@108 -- # disconnect 00:20:14.436 13:30:31 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:14.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:14.436 13:30:31 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.694 13:30:32 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:14.695 13:30:32 -- target/ns_masking.sh@114 -- # nvmftestfini 
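Before the teardown below, a condensed recap of the masking workflow this test exercised, using the same RPCs, NQNs, and host UUID seen above (sketch only; the harness wraps each step in helpers such as connect, ns_is_visible, and NOT):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # attach a namespace that stays hidden from all hosts until explicitly exposed
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    # connect as host1; -I sets the host identifier the mask is checked against
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 65e748f9-130b-4f88-b43a-98019cf2f451 -a 10.0.0.2 -s 4420 -i 4

    # probe visibility: a masked namespace is missing from list-ns and its nguid reads back as all zeros
    nvme list-ns /dev/nvme0 | grep 0x1
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid

    # grant and revoke per-host visibility at runtime
    "$rpc_py" nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    "$rpc_py" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1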
00:20:14.695 13:30:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:14.695 13:30:32 -- nvmf/common.sh@117 -- # sync 00:20:14.695 13:30:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.695 13:30:32 -- nvmf/common.sh@120 -- # set +e 00:20:14.695 13:30:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.695 13:30:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.695 rmmod nvme_tcp 00:20:14.952 rmmod nvme_fabrics 00:20:14.952 rmmod nvme_keyring 00:20:14.952 13:30:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:14.952 13:30:32 -- nvmf/common.sh@124 -- # set -e 00:20:14.953 13:30:32 -- nvmf/common.sh@125 -- # return 0 00:20:14.953 13:30:32 -- nvmf/common.sh@478 -- # '[' -n 70912 ']' 00:20:14.953 13:30:32 -- nvmf/common.sh@479 -- # killprocess 70912 00:20:14.953 13:30:32 -- common/autotest_common.sh@936 -- # '[' -z 70912 ']' 00:20:14.953 13:30:32 -- common/autotest_common.sh@940 -- # kill -0 70912 00:20:14.953 13:30:32 -- common/autotest_common.sh@941 -- # uname 00:20:14.953 13:30:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:14.953 13:30:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70912 00:20:14.953 13:30:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:14.953 13:30:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:14.953 13:30:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70912' 00:20:14.953 killing process with pid 70912 00:20:14.953 13:30:32 -- common/autotest_common.sh@955 -- # kill 70912 00:20:14.953 13:30:32 -- common/autotest_common.sh@960 -- # wait 70912 00:20:15.211 13:30:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:15.211 13:30:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:15.211 13:30:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:15.211 13:30:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.211 13:30:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.211 13:30:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.211 13:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.211 13:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.211 13:30:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:15.211 00:20:15.211 real 0m14.063s 00:20:15.211 user 0m56.244s 00:20:15.211 sys 0m2.464s 00:20:15.211 13:30:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:15.211 ************************************ 00:20:15.211 END TEST nvmf_ns_masking 00:20:15.211 ************************************ 00:20:15.211 13:30:32 -- common/autotest_common.sh@10 -- # set +x 00:20:15.211 13:30:32 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:20:15.211 13:30:32 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:20:15.211 13:30:32 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:15.211 13:30:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:15.211 13:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.211 13:30:32 -- common/autotest_common.sh@10 -- # set +x 00:20:15.470 ************************************ 00:20:15.470 START TEST nvmf_host_management 00:20:15.470 ************************************ 00:20:15.470 13:30:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:15.470 * Looking for test storage... 
00:20:15.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:15.470 13:30:32 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.470 13:30:32 -- nvmf/common.sh@7 -- # uname -s 00:20:15.470 13:30:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.470 13:30:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.470 13:30:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.470 13:30:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.470 13:30:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.470 13:30:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.470 13:30:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.470 13:30:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.470 13:30:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.470 13:30:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.470 13:30:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:15.470 13:30:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:15.470 13:30:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.470 13:30:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.470 13:30:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.470 13:30:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.470 13:30:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.470 13:30:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.470 13:30:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.470 13:30:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.470 13:30:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.470 13:30:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.471 13:30:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.471 13:30:32 -- paths/export.sh@5 -- # export PATH 00:20:15.471 13:30:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.471 13:30:32 -- nvmf/common.sh@47 -- # : 0 00:20:15.471 13:30:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.471 13:30:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.471 13:30:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.471 13:30:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.471 13:30:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.471 13:30:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.471 13:30:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.471 13:30:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.471 13:30:32 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:15.471 13:30:32 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:15.471 13:30:32 -- target/host_management.sh@105 -- # nvmftestinit 00:20:15.471 13:30:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:15.471 13:30:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.471 13:30:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:15.471 13:30:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:15.471 13:30:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:15.471 13:30:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.471 13:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.471 13:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.471 13:30:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:15.471 13:30:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:15.471 13:30:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:15.471 13:30:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:15.471 13:30:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:15.471 13:30:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:15.471 13:30:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.471 13:30:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.471 13:30:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.471 13:30:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:15.471 13:30:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.471 13:30:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.471 13:30:32 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.471 13:30:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.471 13:30:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.471 13:30:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.471 13:30:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.471 13:30:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.471 13:30:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:15.471 13:30:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:15.471 Cannot find device "nvmf_tgt_br" 00:20:15.471 13:30:32 -- nvmf/common.sh@155 -- # true 00:20:15.471 13:30:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.471 Cannot find device "nvmf_tgt_br2" 00:20:15.471 13:30:32 -- nvmf/common.sh@156 -- # true 00:20:15.471 13:30:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:15.471 13:30:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:15.471 Cannot find device "nvmf_tgt_br" 00:20:15.471 13:30:32 -- nvmf/common.sh@158 -- # true 00:20:15.471 13:30:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:15.471 Cannot find device "nvmf_tgt_br2" 00:20:15.471 13:30:32 -- nvmf/common.sh@159 -- # true 00:20:15.471 13:30:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:15.730 13:30:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:15.730 13:30:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.730 13:30:32 -- nvmf/common.sh@162 -- # true 00:20:15.730 13:30:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.730 13:30:32 -- nvmf/common.sh@163 -- # true 00:20:15.730 13:30:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.730 13:30:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:15.730 13:30:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:15.730 13:30:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:15.730 13:30:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:15.730 13:30:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:15.730 13:30:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:15.730 13:30:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:15.730 13:30:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:15.730 13:30:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:15.730 13:30:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:15.730 13:30:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:15.730 13:30:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:15.730 13:30:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:15.730 13:30:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:15.730 13:30:33 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:20:15.730 13:30:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:15.730 13:30:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:15.730 13:30:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:15.730 13:30:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:15.730 13:30:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:15.730 13:30:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:15.988 13:30:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:15.988 13:30:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:15.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:20:15.988 00:20:15.988 --- 10.0.0.2 ping statistics --- 00:20:15.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.988 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:15.988 13:30:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:15.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:15.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:20:15.988 00:20:15.988 --- 10.0.0.3 ping statistics --- 00:20:15.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.988 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:15.988 13:30:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:15.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:15.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:15.988 00:20:15.988 --- 10.0.0.1 ping statistics --- 00:20:15.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.988 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:15.988 13:30:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.988 13:30:33 -- nvmf/common.sh@422 -- # return 0 00:20:15.988 13:30:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:15.988 13:30:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.988 13:30:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:15.988 13:30:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:15.988 13:30:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.988 13:30:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:15.988 13:30:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:15.988 13:30:33 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:20:15.988 13:30:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:15.988 13:30:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:15.988 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.988 ************************************ 00:20:15.988 START TEST nvmf_host_management 00:20:15.988 ************************************ 00:20:15.988 13:30:33 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:20:15.988 13:30:33 -- target/host_management.sh@69 -- # starttarget 00:20:15.988 13:30:33 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:15.988 13:30:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:15.988 13:30:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:15.988 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.988 13:30:33 -- nvmf/common.sh@470 -- # nvmfpid=71492 00:20:15.988 
13:30:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:15.988 13:30:33 -- nvmf/common.sh@471 -- # waitforlisten 71492 00:20:15.988 13:30:33 -- common/autotest_common.sh@817 -- # '[' -z 71492 ']' 00:20:15.988 13:30:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.988 13:30:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.988 13:30:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.988 13:30:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.988 13:30:33 -- common/autotest_common.sh@10 -- # set +x 00:20:15.988 [2024-04-26 13:30:33.350903] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:15.988 [2024-04-26 13:30:33.351007] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.246 [2024-04-26 13:30:33.485284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.246 [2024-04-26 13:30:33.607157] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.246 [2024-04-26 13:30:33.607224] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.246 [2024-04-26 13:30:33.607237] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.246 [2024-04-26 13:30:33.607245] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.246 [2024-04-26 13:30:33.607253] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
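Note: the target above is launched with -e 0xFFFF (enable all tracepoint groups, matching the "Tracepoint Group Mask 0xFFFF specified" notice) and -m 0x1E, a reactor core mask with bits 1-4 set, which is why EAL reports four available cores and the next entries show reactors starting on cores 1 through 4. A small illustrative shell sketch for composing such masks (not part of the test scripts):

  printf '0x%X\n' $(( (1<<1) | (1<<2) | (1<<3) | (1<<4) ))   # prints 0x1E, reactors on cores 1-4
  printf '0x%X\n' $(( 1<<0 ))                                # prints 0x1, the single-core mask the bdevperf runs below use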
00:20:16.246 [2024-04-26 13:30:33.607409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.246 [2024-04-26 13:30:33.607637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:16.246 [2024-04-26 13:30:33.607638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.246 [2024-04-26 13:30:33.608266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.188 13:30:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:17.189 13:30:34 -- common/autotest_common.sh@850 -- # return 0 00:20:17.189 13:30:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:17.189 13:30:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 13:30:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.189 13:30:34 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.189 13:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 [2024-04-26 13:30:34.363310] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.189 13:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 13:30:34 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:17.189 13:30:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 13:30:34 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:17.189 13:30:34 -- target/host_management.sh@23 -- # cat 00:20:17.189 13:30:34 -- target/host_management.sh@30 -- # rpc_cmd 00:20:17.189 13:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 Malloc0 00:20:17.189 [2024-04-26 13:30:34.438158] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.189 13:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:17.189 13:30:34 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:17.189 13:30:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.189 13:30:34 -- target/host_management.sh@73 -- # perfpid=71569 00:20:17.189 13:30:34 -- target/host_management.sh@74 -- # waitforlisten 71569 /var/tmp/bdevperf.sock 00:20:17.189 13:30:34 -- common/autotest_common.sh@817 -- # '[' -z 71569 ']' 00:20:17.189 13:30:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.189 13:30:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:17.189 13:30:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
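Note: the rpcs.txt batch applied just above is not echoed into the log, but its effect is visible: a TCP transport, a Malloc0 bdev, and a subsystem listening on 10.0.0.2:4420 that admits host nqn.2016-06.io.spdk:host0. A plausible equivalent written as individual rpc.py calls (a sketch only; the exact batch contents and any values not visible in the log, such as the serial number, are assumptions):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create -b Malloc0 64 512
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The host-management test then removes and re-adds that host entry while bdevperf keeps I/O in flight, which is what produces the burst of aborted WRITE completions further down.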
00:20:17.189 13:30:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:17.189 13:30:34 -- common/autotest_common.sh@10 -- # set +x 00:20:17.189 13:30:34 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:17.189 13:30:34 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:17.189 13:30:34 -- nvmf/common.sh@521 -- # config=() 00:20:17.189 13:30:34 -- nvmf/common.sh@521 -- # local subsystem config 00:20:17.189 13:30:34 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:17.189 13:30:34 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:17.189 { 00:20:17.189 "params": { 00:20:17.189 "name": "Nvme$subsystem", 00:20:17.189 "trtype": "$TEST_TRANSPORT", 00:20:17.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:17.189 "adrfam": "ipv4", 00:20:17.189 "trsvcid": "$NVMF_PORT", 00:20:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:17.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:17.189 "hdgst": ${hdgst:-false}, 00:20:17.189 "ddgst": ${ddgst:-false} 00:20:17.189 }, 00:20:17.189 "method": "bdev_nvme_attach_controller" 00:20:17.189 } 00:20:17.189 EOF 00:20:17.189 )") 00:20:17.189 13:30:34 -- nvmf/common.sh@543 -- # cat 00:20:17.189 13:30:34 -- nvmf/common.sh@545 -- # jq . 00:20:17.189 13:30:34 -- nvmf/common.sh@546 -- # IFS=, 00:20:17.189 13:30:34 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:17.189 "params": { 00:20:17.189 "name": "Nvme0", 00:20:17.189 "trtype": "tcp", 00:20:17.189 "traddr": "10.0.0.2", 00:20:17.189 "adrfam": "ipv4", 00:20:17.189 "trsvcid": "4420", 00:20:17.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:17.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:17.189 "hdgst": false, 00:20:17.189 "ddgst": false 00:20:17.189 }, 00:20:17.189 "method": "bdev_nvme_attach_controller" 00:20:17.189 }' 00:20:17.189 [2024-04-26 13:30:34.543770] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:17.189 [2024-04-26 13:30:34.543944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71569 ] 00:20:17.447 [2024-04-26 13:30:34.686513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.447 [2024-04-26 13:30:34.815013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.705 Running I/O for 10 seconds... 
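Note: bdevperf is handed its target description as --json /dev/fd/63, which is what a bash process substitution such as --json <(gen_nvmf_target_json 0) expands to; the gen_nvmf_target_json helper is the nvmf/common.sh config=()/printf block traced above, and the printed parameters are the bdev_nvme_attach_controller entry it emits (the surrounding JSON wrapper the helper builds is not shown in this excerpt). Written out explicitly, the invocation amounts to this sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json <(gen_nvmf_target_json 0) \
      -r /var/tmp/bdevperf.sock \
      -q 64 -o 65536 -w verify -t 10
  # -q 64      queue depth
  # -o 65536   64 KiB I/O size
  # -w verify  write, then read back and verify the data
  # -t 10      run for 10 seconds

The attach-controller parameters request NVMe/TCP controller Nvme0 at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode0, host NQN nqn.2016-06.io.spdk:host0, with header and data digests disabled.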
00:20:18.273 13:30:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:18.273 13:30:35 -- common/autotest_common.sh@850 -- # return 0 00:20:18.273 13:30:35 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:18.273 13:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.273 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:18.273 13:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.273 13:30:35 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:18.273 13:30:35 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:18.273 13:30:35 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:18.273 13:30:35 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:18.273 13:30:35 -- target/host_management.sh@52 -- # local ret=1 00:20:18.273 13:30:35 -- target/host_management.sh@53 -- # local i 00:20:18.273 13:30:35 -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:18.273 13:30:35 -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:18.273 13:30:35 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:18.273 13:30:35 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:18.273 13:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.273 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:18.273 13:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.273 13:30:35 -- target/host_management.sh@55 -- # read_io_count=961 00:20:18.273 13:30:35 -- target/host_management.sh@58 -- # '[' 961 -ge 100 ']' 00:20:18.273 13:30:35 -- target/host_management.sh@59 -- # ret=0 00:20:18.273 13:30:35 -- target/host_management.sh@60 -- # break 00:20:18.273 13:30:35 -- target/host_management.sh@64 -- # return 0 00:20:18.273 13:30:35 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:18.273 13:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.273 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:18.273 [2024-04-26 13:30:35.703893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.703948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.703976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.703988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.704009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.704030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.704052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.704072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.273 [2024-04-26 13:30:35.704094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.273 [2024-04-26 13:30:35.704105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704250] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.274 [2024-04-26 13:30:35.704960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.274 [2024-04-26 13:30:35.704971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.704980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.704991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:18.275 [2024-04-26 13:30:35.705124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:18.275 [2024-04-26 13:30:35.705320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 
13:30:35.705353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:18.275 [2024-04-26 13:30:35.705420] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc7e500 was disconnected and freed. reset controller. 00:20:18.275 [2024-04-26 13:30:35.705522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.275 [2024-04-26 13:30:35.705538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.275 [2024-04-26 13:30:35.705559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.275 [2024-04-26 13:30:35.705578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.275 [2024-04-26 13:30:35.705597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.705606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc826c0 is same with the state(5) to be set 00:20:18.275 [2024-04-26 13:30:35.707202] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:18.275 task offset: 0 on job bdev=Nvme0n1 fails 00:20:18.275 00:20:18.275 Latency(us) 00:20:18.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:18.275 Job: Nvme0n1 ended in about 0.71 seconds with error 00:20:18.275 Verification LBA range: start 0x0 length 0x400 00:20:18.275 Nvme0n1 : 0.71 1452.02 90.75 90.75 0.00 40470.32 2889.54 37891.72 00:20:18.275 =================================================================================================================== 00:20:18.275 Total : 1452.02 90.75 90.75 0.00 40470.32 2889.54 37891.72 00:20:18.275 [2024-04-26 13:30:35.709413] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:18.275 [2024-04-26 13:30:35.709546] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc826c0 (9): Bad file descriptor 00:20:18.275 13:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.275 13:30:35 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:18.275 13:30:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.275 13:30:35 -- common/autotest_common.sh@10 -- # set +x 00:20:18.275 [2024-04-26 13:30:35.718751] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:20:18.275 [2024-04-26 13:30:35.719056] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:18.275 [2024-04-26 13:30:35.719215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.275 [2024-04-26 13:30:35.719366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:20:18.275 [2024-04-26 13:30:35.719505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:20:18.534 [2024-04-26 13:30:35.719630] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:18.534 [2024-04-26 13:30:35.719645] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc826c0 00:20:18.534 [2024-04-26 13:30:35.719693] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc826c0 (9): Bad file descriptor 00:20:18.534 [2024-04-26 13:30:35.719714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:18.534 [2024-04-26 13:30:35.719724] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:18.534 [2024-04-26 13:30:35.719734] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:18.534 [2024-04-26 13:30:35.719752] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:18.534 13:30:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.534 13:30:35 -- target/host_management.sh@87 -- # sleep 1 00:20:19.470 13:30:36 -- target/host_management.sh@91 -- # kill -9 71569 00:20:19.470 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71569) - No such process 00:20:19.470 13:30:36 -- target/host_management.sh@91 -- # true 00:20:19.470 13:30:36 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:19.470 13:30:36 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:19.470 13:30:36 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:19.470 13:30:36 -- nvmf/common.sh@521 -- # config=() 00:20:19.470 13:30:36 -- nvmf/common.sh@521 -- # local subsystem config 00:20:19.470 13:30:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:19.470 13:30:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:19.470 { 00:20:19.470 "params": { 00:20:19.470 "name": "Nvme$subsystem", 00:20:19.470 "trtype": "$TEST_TRANSPORT", 00:20:19.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.470 "adrfam": "ipv4", 00:20:19.470 "trsvcid": "$NVMF_PORT", 00:20:19.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.470 "hdgst": ${hdgst:-false}, 00:20:19.470 "ddgst": ${ddgst:-false} 00:20:19.470 }, 00:20:19.470 "method": "bdev_nvme_attach_controller" 00:20:19.470 } 00:20:19.470 EOF 00:20:19.470 )") 00:20:19.470 13:30:36 -- nvmf/common.sh@543 -- # cat 00:20:19.470 13:30:36 -- nvmf/common.sh@545 -- # jq . 
00:20:19.470 13:30:36 -- nvmf/common.sh@546 -- # IFS=, 00:20:19.470 13:30:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:19.470 "params": { 00:20:19.470 "name": "Nvme0", 00:20:19.470 "trtype": "tcp", 00:20:19.470 "traddr": "10.0.0.2", 00:20:19.470 "adrfam": "ipv4", 00:20:19.470 "trsvcid": "4420", 00:20:19.470 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.470 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.470 "hdgst": false, 00:20:19.470 "ddgst": false 00:20:19.470 }, 00:20:19.470 "method": "bdev_nvme_attach_controller" 00:20:19.470 }' 00:20:19.470 [2024-04-26 13:30:36.810467] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:19.470 [2024-04-26 13:30:36.810625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71619 ] 00:20:19.729 [2024-04-26 13:30:36.957681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.729 [2024-04-26 13:30:37.088744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.987 Running I/O for 1 seconds... 00:20:20.921 00:20:20.921 Latency(us) 00:20:20.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:20.921 Verification LBA range: start 0x0 length 0x400 00:20:20.921 Nvme0n1 : 1.03 1498.09 93.63 0.00 0.00 41794.12 5987.61 43134.60 00:20:20.921 =================================================================================================================== 00:20:20.921 Total : 1498.09 93.63 0.00 0.00 41794.12 5987.61 43134.60 00:20:21.179 13:30:38 -- target/host_management.sh@102 -- # stoptarget 00:20:21.179 13:30:38 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:21.179 13:30:38 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:20:21.179 13:30:38 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:21.179 13:30:38 -- target/host_management.sh@40 -- # nvmftestfini 00:20:21.179 13:30:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:21.179 13:30:38 -- nvmf/common.sh@117 -- # sync 00:20:21.179 13:30:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.179 13:30:38 -- nvmf/common.sh@120 -- # set +e 00:20:21.179 13:30:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.179 13:30:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.179 rmmod nvme_tcp 00:20:21.179 rmmod nvme_fabrics 00:20:21.179 rmmod nvme_keyring 00:20:21.440 13:30:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.440 13:30:38 -- nvmf/common.sh@124 -- # set -e 00:20:21.440 13:30:38 -- nvmf/common.sh@125 -- # return 0 00:20:21.440 13:30:38 -- nvmf/common.sh@478 -- # '[' -n 71492 ']' 00:20:21.440 13:30:38 -- nvmf/common.sh@479 -- # killprocess 71492 00:20:21.440 13:30:38 -- common/autotest_common.sh@936 -- # '[' -z 71492 ']' 00:20:21.440 13:30:38 -- common/autotest_common.sh@940 -- # kill -0 71492 00:20:21.440 13:30:38 -- common/autotest_common.sh@941 -- # uname 00:20:21.440 13:30:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.440 13:30:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71492 00:20:21.440 killing process with pid 71492 00:20:21.440 13:30:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
00:20:21.440 13:30:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:21.440 13:30:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71492' 00:20:21.440 13:30:38 -- common/autotest_common.sh@955 -- # kill 71492 00:20:21.440 13:30:38 -- common/autotest_common.sh@960 -- # wait 71492 00:20:21.707 [2024-04-26 13:30:38.926635] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:21.707 13:30:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:21.707 13:30:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:21.707 13:30:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:21.707 13:30:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.707 13:30:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.707 13:30:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.707 13:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.707 13:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.707 13:30:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:21.707 00:20:21.707 real 0m5.702s 00:20:21.707 user 0m24.071s 00:20:21.707 sys 0m1.301s 00:20:21.707 13:30:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:21.707 13:30:38 -- common/autotest_common.sh@10 -- # set +x 00:20:21.707 ************************************ 00:20:21.707 END TEST nvmf_host_management 00:20:21.707 ************************************ 00:20:21.707 13:30:39 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:21.707 00:20:21.707 real 0m6.334s 00:20:21.707 user 0m24.211s 00:20:21.707 sys 0m1.597s 00:20:21.707 13:30:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:21.707 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:20:21.707 ************************************ 00:20:21.707 END TEST nvmf_host_management 00:20:21.707 ************************************ 00:20:21.707 13:30:39 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:21.707 13:30:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:21.707 13:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.707 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:20:21.707 ************************************ 00:20:21.707 START TEST nvmf_lvol 00:20:21.707 ************************************ 00:20:21.707 13:30:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:21.966 * Looking for test storage... 
00:20:21.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:21.966 13:30:39 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.966 13:30:39 -- nvmf/common.sh@7 -- # uname -s 00:20:21.966 13:30:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.966 13:30:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.966 13:30:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.966 13:30:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.966 13:30:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.966 13:30:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.966 13:30:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.966 13:30:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.966 13:30:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.966 13:30:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.966 13:30:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:21.966 13:30:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:21.966 13:30:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.966 13:30:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.966 13:30:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.966 13:30:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.966 13:30:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.966 13:30:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.966 13:30:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.966 13:30:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.966 13:30:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.966 13:30:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.966 13:30:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.966 13:30:39 -- paths/export.sh@5 -- # export PATH 00:20:21.967 13:30:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.967 13:30:39 -- nvmf/common.sh@47 -- # : 0 00:20:21.967 13:30:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.967 13:30:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.967 13:30:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.967 13:30:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.967 13:30:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.967 13:30:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.967 13:30:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.967 13:30:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.967 13:30:39 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:21.967 13:30:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:21.967 13:30:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.967 13:30:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:21.967 13:30:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:21.967 13:30:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:21.967 13:30:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.967 13:30:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.967 13:30:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.967 13:30:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:21.967 13:30:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:21.967 13:30:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:21.967 13:30:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:21.967 13:30:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:21.967 13:30:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:21.967 13:30:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.967 13:30:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.967 13:30:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.967 13:30:39 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:21.967 13:30:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.967 13:30:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.967 13:30:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.967 13:30:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.967 13:30:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.967 13:30:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.967 13:30:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.967 13:30:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.967 13:30:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:21.967 13:30:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:21.967 Cannot find device "nvmf_tgt_br" 00:20:21.967 13:30:39 -- nvmf/common.sh@155 -- # true 00:20:21.967 13:30:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.967 Cannot find device "nvmf_tgt_br2" 00:20:21.967 13:30:39 -- nvmf/common.sh@156 -- # true 00:20:21.967 13:30:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:21.967 13:30:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:21.967 Cannot find device "nvmf_tgt_br" 00:20:21.967 13:30:39 -- nvmf/common.sh@158 -- # true 00:20:21.967 13:30:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:21.967 Cannot find device "nvmf_tgt_br2" 00:20:21.967 13:30:39 -- nvmf/common.sh@159 -- # true 00:20:21.967 13:30:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:21.967 13:30:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:21.967 13:30:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.967 13:30:39 -- nvmf/common.sh@162 -- # true 00:20:21.967 13:30:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.226 13:30:39 -- nvmf/common.sh@163 -- # true 00:20:22.226 13:30:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.226 13:30:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.226 13:30:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.226 13:30:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.226 13:30:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.226 13:30:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.226 13:30:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.226 13:30:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.226 13:30:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.226 13:30:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:22.226 13:30:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:22.226 13:30:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:22.226 13:30:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:22.226 13:30:39 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.226 13:30:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.226 13:30:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.226 13:30:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:22.226 13:30:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:22.226 13:30:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.226 13:30:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.226 13:30:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.226 13:30:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.226 13:30:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.226 13:30:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:22.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:20:22.226 00:20:22.226 --- 10.0.0.2 ping statistics --- 00:20:22.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.226 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:22.226 13:30:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:22.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:20:22.227 00:20:22.227 --- 10.0.0.3 ping statistics --- 00:20:22.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.227 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:22.227 13:30:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:22.227 00:20:22.227 --- 10.0.0.1 ping statistics --- 00:20:22.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.227 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:22.227 13:30:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.227 13:30:39 -- nvmf/common.sh@422 -- # return 0 00:20:22.227 13:30:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:22.227 13:30:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.227 13:30:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:22.227 13:30:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:22.227 13:30:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.227 13:30:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:22.227 13:30:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:22.227 13:30:39 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:22.227 13:30:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:22.227 13:30:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:22.227 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:20:22.227 13:30:39 -- nvmf/common.sh@470 -- # nvmfpid=71854 00:20:22.227 13:30:39 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:22.227 13:30:39 -- nvmf/common.sh@471 -- # waitforlisten 71854 00:20:22.227 13:30:39 -- common/autotest_common.sh@817 -- # '[' -z 71854 ']' 00:20:22.227 13:30:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.227 13:30:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:22.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.227 13:30:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.227 13:30:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:22.227 13:30:39 -- common/autotest_common.sh@10 -- # set +x 00:20:22.485 [2024-04-26 13:30:39.710643] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:22.485 [2024-04-26 13:30:39.711070] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.485 [2024-04-26 13:30:39.846217] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:22.742 [2024-04-26 13:30:39.967115] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.743 [2024-04-26 13:30:39.967376] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.743 [2024-04-26 13:30:39.967548] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.743 [2024-04-26 13:30:39.967603] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.743 [2024-04-26 13:30:39.967704] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
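For reference, the nvmf_veth_init sequence traced above reduces to the sketch below (reconstructed from the trace, not the verbatim test/nvmf/common.sh helper; the second target interface nvmf_tgt_if2 / 10.0.0.3 is set up the same way and is omitted for brevity):

# initiator stays on the host, target lives in its own namespace; veth pairs plus a bridge join the two
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# open the NVMe/TCP port, allow traffic across the bridge, then sanity-ping both directions
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

This split is what lets the initiator and the NVMe/TCP target run on a single VM while still traversing a real network path.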
00:20:22.743 [2024-04-26 13:30:39.967914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.743 [2024-04-26 13:30:39.968059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.743 [2024-04-26 13:30:39.968059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.698 13:30:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.698 13:30:40 -- common/autotest_common.sh@850 -- # return 0 00:20:23.698 13:30:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.698 13:30:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.698 13:30:40 -- common/autotest_common.sh@10 -- # set +x 00:20:23.698 13:30:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.698 13:30:40 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:23.698 [2024-04-26 13:30:41.084759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.698 13:30:41 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:24.262 13:30:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:24.262 13:30:41 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:24.519 13:30:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:24.519 13:30:41 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:24.777 13:30:42 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:25.035 13:30:42 -- target/nvmf_lvol.sh@29 -- # lvs=9323aca7-040f-409d-bdfb-ee218011bad4 00:20:25.035 13:30:42 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9323aca7-040f-409d-bdfb-ee218011bad4 lvol 20 00:20:25.294 13:30:42 -- target/nvmf_lvol.sh@32 -- # lvol=2ebabf51-e606-4803-9585-9ee825a4cc85 00:20:25.294 13:30:42 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:25.859 13:30:43 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2ebabf51-e606-4803-9585-9ee825a4cc85 00:20:25.859 13:30:43 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.117 [2024-04-26 13:30:43.472253] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.117 13:30:43 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:26.430 13:30:43 -- target/nvmf_lvol.sh@42 -- # perf_pid=72008 00:20:26.430 13:30:43 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:26.430 13:30:43 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:27.376 13:30:44 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 2ebabf51-e606-4803-9585-9ee825a4cc85 MY_SNAPSHOT 00:20:27.940 13:30:45 -- target/nvmf_lvol.sh@47 -- # snapshot=d152df51-2a66-4fe4-b426-13747ea98705 00:20:27.940 13:30:45 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 2ebabf51-e606-4803-9585-9ee825a4cc85 30 00:20:28.198 13:30:45 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d152df51-2a66-4fe4-b426-13747ea98705 MY_CLONE 00:20:28.455 13:30:45 -- target/nvmf_lvol.sh@49 -- # clone=da860008-1e8f-4f03-9924-8e2e03cf9e59 00:20:28.455 13:30:45 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate da860008-1e8f-4f03-9924-8e2e03cf9e59 00:20:29.022 13:30:46 -- target/nvmf_lvol.sh@53 -- # wait 72008 00:20:37.135 Initializing NVMe Controllers 00:20:37.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:37.135 Controller IO queue size 128, less than required. 00:20:37.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:37.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:37.135 Initialization complete. Launching workers. 00:20:37.135 ======================================================== 00:20:37.135 Latency(us) 00:20:37.135 Device Information : IOPS MiB/s Average min max 00:20:37.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10127.20 39.56 12645.77 2782.18 136151.21 00:20:37.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10075.60 39.36 12706.22 3707.54 63160.11 00:20:37.135 ======================================================== 00:20:37.135 Total : 20202.79 78.92 12675.91 2782.18 136151.21 00:20:37.135 00:20:37.135 13:30:54 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.135 13:30:54 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2ebabf51-e606-4803-9585-9ee825a4cc85 00:20:37.394 13:30:54 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9323aca7-040f-409d-bdfb-ee218011bad4 00:20:37.652 13:30:54 -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:37.652 13:30:54 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:37.652 13:30:54 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:37.652 13:30:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:37.652 13:30:55 -- nvmf/common.sh@117 -- # sync 00:20:37.652 13:30:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.652 13:30:55 -- nvmf/common.sh@120 -- # set +e 00:20:37.652 13:30:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.652 13:30:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.652 rmmod nvme_tcp 00:20:37.652 rmmod nvme_fabrics 00:20:37.652 rmmod nvme_keyring 00:20:37.652 13:30:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.652 13:30:55 -- nvmf/common.sh@124 -- # set -e 00:20:37.652 13:30:55 -- nvmf/common.sh@125 -- # return 0 00:20:37.652 13:30:55 -- nvmf/common.sh@478 -- # '[' -n 71854 ']' 00:20:37.652 13:30:55 -- nvmf/common.sh@479 -- # killprocess 71854 00:20:37.652 13:30:55 -- common/autotest_common.sh@936 -- # '[' -z 71854 ']' 00:20:37.652 13:30:55 -- common/autotest_common.sh@940 -- # kill -0 71854 00:20:37.652 13:30:55 -- common/autotest_common.sh@941 -- # uname 00:20:37.652 13:30:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:37.652 13:30:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 71854 00:20:37.911 killing process with pid 71854 00:20:37.911 13:30:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:37.911 13:30:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:37.911 13:30:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71854' 00:20:37.911 13:30:55 -- common/autotest_common.sh@955 -- # kill 71854 00:20:37.911 13:30:55 -- common/autotest_common.sh@960 -- # wait 71854 00:20:38.170 13:30:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:38.170 13:30:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:38.170 13:30:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:38.170 13:30:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.170 13:30:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.170 13:30:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.170 13:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.170 13:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.170 13:30:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:38.170 00:20:38.170 real 0m16.317s 00:20:38.170 user 1m7.623s 00:20:38.170 sys 0m4.096s 00:20:38.170 13:30:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:38.170 13:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.170 ************************************ 00:20:38.170 END TEST nvmf_lvol 00:20:38.170 ************************************ 00:20:38.170 13:30:55 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:38.170 13:30:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:38.170 13:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:38.170 13:30:55 -- common/autotest_common.sh@10 -- # set +x 00:20:38.170 ************************************ 00:20:38.170 START TEST nvmf_lvs_grow 00:20:38.170 ************************************ 00:20:38.170 13:30:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:38.430 * Looking for test storage... 
00:20:38.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:38.430 13:30:55 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.430 13:30:55 -- nvmf/common.sh@7 -- # uname -s 00:20:38.430 13:30:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.430 13:30:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.430 13:30:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.430 13:30:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.430 13:30:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.430 13:30:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.430 13:30:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.430 13:30:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.430 13:30:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.430 13:30:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.430 13:30:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:38.430 13:30:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:20:38.430 13:30:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.430 13:30:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.430 13:30:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.430 13:30:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.430 13:30:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.430 13:30:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.430 13:30:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.430 13:30:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.430 13:30:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.430 13:30:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.430 13:30:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.430 13:30:55 -- paths/export.sh@5 -- # export PATH 00:20:38.430 13:30:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.430 13:30:55 -- nvmf/common.sh@47 -- # : 0 00:20:38.430 13:30:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.430 13:30:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.430 13:30:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.430 13:30:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.430 13:30:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.430 13:30:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.430 13:30:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.430 13:30:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.431 13:30:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:38.431 13:30:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:38.431 13:30:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:20:38.431 13:30:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:38.431 13:30:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.431 13:30:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:38.431 13:30:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:38.431 13:30:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:38.431 13:30:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.431 13:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.431 13:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.431 13:30:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:38.431 13:30:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:38.431 13:30:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:38.431 13:30:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:38.431 13:30:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:38.431 13:30:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:38.431 13:30:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.431 13:30:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.431 13:30:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:38.431 13:30:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:38.431 13:30:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.431 13:30:55 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.431 13:30:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.431 13:30:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.431 13:30:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.431 13:30:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.431 13:30:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.431 13:30:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.431 13:30:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:38.431 13:30:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:38.431 Cannot find device "nvmf_tgt_br" 00:20:38.431 13:30:55 -- nvmf/common.sh@155 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.431 Cannot find device "nvmf_tgt_br2" 00:20:38.431 13:30:55 -- nvmf/common.sh@156 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:38.431 13:30:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:38.431 Cannot find device "nvmf_tgt_br" 00:20:38.431 13:30:55 -- nvmf/common.sh@158 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:38.431 Cannot find device "nvmf_tgt_br2" 00:20:38.431 13:30:55 -- nvmf/common.sh@159 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:38.431 13:30:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:38.431 13:30:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.431 13:30:55 -- nvmf/common.sh@162 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.431 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.431 13:30:55 -- nvmf/common.sh@163 -- # true 00:20:38.431 13:30:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.431 13:30:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.431 13:30:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:38.431 13:30:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:38.431 13:30:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:38.690 13:30:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:38.690 13:30:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:38.690 13:30:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:38.690 13:30:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:38.690 13:30:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:38.690 13:30:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:38.690 13:30:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:38.690 13:30:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:38.690 13:30:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:38.690 13:30:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:38.690 13:30:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:38.690 13:30:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:38.690 13:30:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:38.690 13:30:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:38.690 13:30:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:38.690 13:30:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.690 13:30:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.690 13:30:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.690 13:30:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:38.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:20:38.690 00:20:38.690 --- 10.0.0.2 ping statistics --- 00:20:38.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.690 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:38.690 13:30:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:38.690 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.690 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:38.690 00:20:38.690 --- 10.0.0.3 ping statistics --- 00:20:38.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.690 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:38.690 13:30:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:38.690 00:20:38.690 --- 10.0.0.1 ping statistics --- 00:20:38.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.690 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:38.690 13:30:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.690 13:30:56 -- nvmf/common.sh@422 -- # return 0 00:20:38.690 13:30:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:38.690 13:30:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.690 13:30:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:38.690 13:30:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:38.690 13:30:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.690 13:30:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:38.690 13:30:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:38.690 13:30:56 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:20:38.690 13:30:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:38.690 13:30:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:38.690 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:38.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:38.690 13:30:56 -- nvmf/common.sh@470 -- # nvmfpid=72379 00:20:38.690 13:30:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:38.690 13:30:56 -- nvmf/common.sh@471 -- # waitforlisten 72379 00:20:38.690 13:30:56 -- common/autotest_common.sh@817 -- # '[' -z 72379 ']' 00:20:38.690 13:30:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.690 13:30:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:38.690 13:30:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.690 13:30:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:38.690 13:30:56 -- common/autotest_common.sh@10 -- # set +x 00:20:38.949 [2024-04-26 13:30:56.140347] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:20:38.949 [2024-04-26 13:30:56.140735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.949 [2024-04-26 13:30:56.281896] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.207 [2024-04-26 13:30:56.421009] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.207 [2024-04-26 13:30:56.421266] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.207 [2024-04-26 13:30:56.421433] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.207 [2024-04-26 13:30:56.421637] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.207 [2024-04-26 13:30:56.421680] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
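The nvmfappstart step above comes down to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers; a minimal sketch, assuming the paths shown in the log (the polling loop here is only illustrative — the real waitforlisten helper in autotest_common.sh is more involved):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until the UNIX-domain RPC socket accepts requests before issuing further rpc.py calls
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
# first RPC of the lvs_grow tests: create the TCP transport
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192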
00:20:39.207 [2024-04-26 13:30:56.421860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.893 13:30:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:39.893 13:30:57 -- common/autotest_common.sh@850 -- # return 0 00:20:39.893 13:30:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:39.893 13:30:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:39.893 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:39.893 13:30:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.893 13:30:57 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:40.152 [2024-04-26 13:30:57.510937] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.152 13:30:57 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:20:40.152 13:30:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:40.152 13:30:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.152 13:30:57 -- common/autotest_common.sh@10 -- # set +x 00:20:40.411 ************************************ 00:20:40.411 START TEST lvs_grow_clean 00:20:40.411 ************************************ 00:20:40.411 13:30:57 -- common/autotest_common.sh@1111 -- # lvs_grow 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:40.411 13:30:57 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:40.670 13:30:57 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:40.670 13:30:57 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:40.929 13:30:58 -- target/nvmf_lvs_grow.sh@28 -- # lvs=dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:40.929 13:30:58 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:40.929 13:30:58 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:41.188 13:30:58 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:41.188 13:30:58 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:41.188 13:30:58 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u dbca1b28-140f-4f42-8b2b-114cb897d860 lvol 150 00:20:41.448 13:30:58 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d4537b9c-f016-4767-964a-18ca6d49d86e 00:20:41.448 13:30:58 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:41.448 13:30:58 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:41.707 [2024-04-26 13:30:59.032835] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:41.707 [2024-04-26 13:30:59.032937] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:41.707 true 00:20:41.707 13:30:59 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:41.707 13:30:59 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:41.966 13:30:59 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:41.966 13:30:59 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:42.226 13:30:59 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4537b9c-f016-4767-964a-18ca6d49d86e 00:20:42.485 13:30:59 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.776 [2024-04-26 13:31:00.114434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.776 13:31:00 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:43.035 13:31:00 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72552 00:20:43.035 13:31:00 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:43.035 13:31:00 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:43.035 13:31:00 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72552 /var/tmp/bdevperf.sock 00:20:43.035 13:31:00 -- common/autotest_common.sh@817 -- # '[' -z 72552 ']' 00:20:43.035 13:31:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.035 13:31:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.035 13:31:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.035 13:31:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.035 13:31:00 -- common/autotest_common.sh@10 -- # set +x 00:20:43.035 [2024-04-26 13:31:00.441145] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
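The lvs_grow_clean provisioning traced above condenses into the rpc.py sequence below (a sketch assembled from the trace; the UUIDs are generated at runtime, and capturing them via command substitution is an assumption about how the helper stores them):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

# 200M file-backed AIO bdev, lvstore with 4 MiB clusters (49 data clusters), 150M lvol
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

# grow the backing file; bdev_aio_rescan updates the bdev's block count (51200 -> 102400)
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev

# export the lvol over NVMe/TCP for the bdevperf initiator
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420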
00:20:43.035 [2024-04-26 13:31:00.441257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:20:43.294 [2024-04-26 13:31:00.577904] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.294 [2024-04-26 13:31:00.707556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.230 13:31:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:44.230 13:31:01 -- common/autotest_common.sh@850 -- # return 0 00:20:44.230 13:31:01 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:44.488 Nvme0n1 00:20:44.488 13:31:01 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:44.747 [ 00:20:44.747 { 00:20:44.747 "aliases": [ 00:20:44.747 "d4537b9c-f016-4767-964a-18ca6d49d86e" 00:20:44.747 ], 00:20:44.747 "assigned_rate_limits": { 00:20:44.747 "r_mbytes_per_sec": 0, 00:20:44.747 "rw_ios_per_sec": 0, 00:20:44.747 "rw_mbytes_per_sec": 0, 00:20:44.747 "w_mbytes_per_sec": 0 00:20:44.747 }, 00:20:44.747 "block_size": 4096, 00:20:44.747 "claimed": false, 00:20:44.747 "driver_specific": { 00:20:44.747 "mp_policy": "active_passive", 00:20:44.747 "nvme": [ 00:20:44.747 { 00:20:44.747 "ctrlr_data": { 00:20:44.747 "ana_reporting": false, 00:20:44.747 "cntlid": 1, 00:20:44.747 "firmware_revision": "24.05", 00:20:44.747 "model_number": "SPDK bdev Controller", 00:20:44.747 "multi_ctrlr": true, 00:20:44.747 "oacs": { 00:20:44.747 "firmware": 0, 00:20:44.747 "format": 0, 00:20:44.747 "ns_manage": 0, 00:20:44.747 "security": 0 00:20:44.747 }, 00:20:44.747 "serial_number": "SPDK0", 00:20:44.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.747 "vendor_id": "0x8086" 00:20:44.747 }, 00:20:44.747 "ns_data": { 00:20:44.747 "can_share": true, 00:20:44.747 "id": 1 00:20:44.747 }, 00:20:44.747 "trid": { 00:20:44.747 "adrfam": "IPv4", 00:20:44.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.747 "traddr": "10.0.0.2", 00:20:44.747 "trsvcid": "4420", 00:20:44.747 "trtype": "TCP" 00:20:44.747 }, 00:20:44.747 "vs": { 00:20:44.747 "nvme_version": "1.3" 00:20:44.747 } 00:20:44.747 } 00:20:44.747 ] 00:20:44.747 }, 00:20:44.747 "memory_domains": [ 00:20:44.747 { 00:20:44.747 "dma_device_id": "system", 00:20:44.747 "dma_device_type": 1 00:20:44.747 } 00:20:44.747 ], 00:20:44.747 "name": "Nvme0n1", 00:20:44.747 "num_blocks": 38912, 00:20:44.747 "product_name": "NVMe disk", 00:20:44.747 "supported_io_types": { 00:20:44.747 "abort": true, 00:20:44.747 "compare": true, 00:20:44.747 "compare_and_write": true, 00:20:44.747 "flush": true, 00:20:44.747 "nvme_admin": true, 00:20:44.747 "nvme_io": true, 00:20:44.747 "read": true, 00:20:44.747 "reset": true, 00:20:44.747 "unmap": true, 00:20:44.747 "write": true, 00:20:44.747 "write_zeroes": true 00:20:44.747 }, 00:20:44.747 "uuid": "d4537b9c-f016-4767-964a-18ca6d49d86e", 00:20:44.747 "zoned": false 00:20:44.747 } 00:20:44.747 ] 00:20:44.747 13:31:02 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72605 00:20:44.747 13:31:02 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:20:44.747 13:31:02 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock 
perform_tests 00:20:45.005 Running I/O for 10 seconds... 00:20:45.940 Latency(us) 00:20:45.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.941 Nvme0n1 : 1.00 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:20:45.941 =================================================================================================================== 00:20:45.941 Total : 7344.00 28.69 0.00 0.00 0.00 0.00 0.00 00:20:45.941 00:20:46.873 13:31:04 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:46.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:46.873 Nvme0n1 : 2.00 7223.50 28.22 0.00 0.00 0.00 0.00 0.00 00:20:46.873 =================================================================================================================== 00:20:46.873 Total : 7223.50 28.22 0.00 0.00 0.00 0.00 0.00 00:20:46.873 00:20:47.132 true 00:20:47.132 13:31:04 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:47.132 13:31:04 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:47.389 13:31:04 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:47.389 13:31:04 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:47.389 13:31:04 -- target/nvmf_lvs_grow.sh@65 -- # wait 72605 00:20:47.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:47.954 Nvme0n1 : 3.00 7286.67 28.46 0.00 0.00 0.00 0.00 0.00 00:20:47.954 =================================================================================================================== 00:20:47.954 Total : 7286.67 28.46 0.00 0.00 0.00 0.00 0.00 00:20:47.954 00:20:48.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:48.888 Nvme0n1 : 4.00 7359.25 28.75 0.00 0.00 0.00 0.00 0.00 00:20:48.888 =================================================================================================================== 00:20:48.888 Total : 7359.25 28.75 0.00 0.00 0.00 0.00 0.00 00:20:48.888 00:20:49.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:49.884 Nvme0n1 : 5.00 7494.80 29.28 0.00 0.00 0.00 0.00 0.00 00:20:49.884 =================================================================================================================== 00:20:49.884 Total : 7494.80 29.28 0.00 0.00 0.00 0.00 0.00 00:20:49.884 00:20:50.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:50.820 Nvme0n1 : 6.00 7551.33 29.50 0.00 0.00 0.00 0.00 0.00 00:20:50.820 =================================================================================================================== 00:20:50.820 Total : 7551.33 29.50 0.00 0.00 0.00 0.00 0.00 00:20:50.820 00:20:51.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:51.756 Nvme0n1 : 7.00 7557.14 29.52 0.00 0.00 0.00 0.00 0.00 00:20:51.756 =================================================================================================================== 00:20:51.756 Total : 7557.14 29.52 0.00 0.00 0.00 0.00 0.00 00:20:51.756 00:20:53.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:53.131 Nvme0n1 : 8.00 7596.50 29.67 0.00 0.00 0.00 0.00 0.00 00:20:53.131 
=================================================================================================================== 00:20:53.131 Total : 7596.50 29.67 0.00 0.00 0.00 0.00 0.00 00:20:53.131 00:20:54.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:54.065 Nvme0n1 : 9.00 7629.78 29.80 0.00 0.00 0.00 0.00 0.00 00:20:54.065 =================================================================================================================== 00:20:54.065 Total : 7629.78 29.80 0.00 0.00 0.00 0.00 0.00 00:20:54.065 00:20:55.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:55.012 Nvme0n1 : 10.00 7652.80 29.89 0.00 0.00 0.00 0.00 0.00 00:20:55.012 =================================================================================================================== 00:20:55.012 Total : 7652.80 29.89 0.00 0.00 0.00 0.00 0.00 00:20:55.012 00:20:55.012 00:20:55.012 Latency(us) 00:20:55.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:55.012 Nvme0n1 : 10.01 7658.59 29.92 0.00 0.00 16707.69 7923.90 44802.79 00:20:55.012 =================================================================================================================== 00:20:55.012 Total : 7658.59 29.92 0.00 0.00 16707.69 7923.90 44802.79 00:20:55.012 0 00:20:55.012 13:31:12 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72552 00:20:55.012 13:31:12 -- common/autotest_common.sh@936 -- # '[' -z 72552 ']' 00:20:55.012 13:31:12 -- common/autotest_common.sh@940 -- # kill -0 72552 00:20:55.012 13:31:12 -- common/autotest_common.sh@941 -- # uname 00:20:55.012 13:31:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.012 13:31:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72552 00:20:55.012 killing process with pid 72552 00:20:55.012 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.012 00:20:55.012 Latency(us) 00:20:55.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.012 =================================================================================================================== 00:20:55.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:55.012 13:31:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:55.012 13:31:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:55.012 13:31:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72552' 00:20:55.012 13:31:12 -- common/autotest_common.sh@955 -- # kill 72552 00:20:55.012 13:31:12 -- common/autotest_common.sh@960 -- # wait 72552 00:20:55.270 13:31:12 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:55.529 13:31:12 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:55.529 13:31:12 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:20:55.786 13:31:13 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:20:55.786 13:31:13 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:20:55.786 13:31:13 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:56.043 [2024-04-26 13:31:13.298540] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:56.043 
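The grow step and the checks bracketing the bdevperf run above amount to the following (continuing the sketch, with $lvs holding the lvstore UUID; 49 and 99 total clusters correspond to the 200M and 400M backing-file sizes at a 4 MiB cluster size):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# before the grow the store still reports the original 49 data clusters
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# grow the lvstore into the space exposed by bdev_aio_rescan, while I/O is in flight
$rpc bdev_lvol_grow_lvstore -u "$lvs"

# afterwards the cluster count has roughly doubled and free space can be checked
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 once the 150M lvol is accounted for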
13:31:13 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:56.043 13:31:13 -- common/autotest_common.sh@638 -- # local es=0 00:20:56.043 13:31:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:56.043 13:31:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.043 13:31:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:56.043 13:31:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.043 13:31:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:56.043 13:31:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.043 13:31:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:56.043 13:31:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:56.043 13:31:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:56.043 13:31:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:56.300 2024/04/26 13:31:13 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:dbca1b28-140f-4f42-8b2b-114cb897d860], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:20:56.300 request: 00:20:56.300 { 00:20:56.300 "method": "bdev_lvol_get_lvstores", 00:20:56.300 "params": { 00:20:56.300 "uuid": "dbca1b28-140f-4f42-8b2b-114cb897d860" 00:20:56.300 } 00:20:56.300 } 00:20:56.300 Got JSON-RPC error response 00:20:56.300 GoRPCClient: error on JSON-RPC call 00:20:56.300 13:31:13 -- common/autotest_common.sh@641 -- # es=1 00:20:56.300 13:31:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:56.300 13:31:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:56.300 13:31:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:56.300 13:31:13 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:56.558 aio_bdev 00:20:56.558 13:31:13 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d4537b9c-f016-4767-964a-18ca6d49d86e 00:20:56.558 13:31:13 -- common/autotest_common.sh@885 -- # local bdev_name=d4537b9c-f016-4767-964a-18ca6d49d86e 00:20:56.558 13:31:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:20:56.558 13:31:13 -- common/autotest_common.sh@887 -- # local i 00:20:56.558 13:31:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:20:56.558 13:31:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:20:56.558 13:31:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:56.816 13:31:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d4537b9c-f016-4767-964a-18ca6d49d86e -t 2000 00:20:57.074 [ 00:20:57.074 { 00:20:57.074 "aliases": [ 00:20:57.074 "lvs/lvol" 00:20:57.074 ], 00:20:57.074 "assigned_rate_limits": { 00:20:57.074 "r_mbytes_per_sec": 0, 00:20:57.074 "rw_ios_per_sec": 0, 00:20:57.074 "rw_mbytes_per_sec": 0, 00:20:57.074 "w_mbytes_per_sec": 0 00:20:57.074 }, 00:20:57.074 "block_size": 4096, 
00:20:57.074 "claimed": false, 00:20:57.074 "driver_specific": { 00:20:57.074 "lvol": { 00:20:57.074 "base_bdev": "aio_bdev", 00:20:57.074 "clone": false, 00:20:57.074 "esnap_clone": false, 00:20:57.074 "lvol_store_uuid": "dbca1b28-140f-4f42-8b2b-114cb897d860", 00:20:57.074 "snapshot": false, 00:20:57.074 "thin_provision": false 00:20:57.074 } 00:20:57.074 }, 00:20:57.074 "name": "d4537b9c-f016-4767-964a-18ca6d49d86e", 00:20:57.074 "num_blocks": 38912, 00:20:57.074 "product_name": "Logical Volume", 00:20:57.074 "supported_io_types": { 00:20:57.074 "abort": false, 00:20:57.074 "compare": false, 00:20:57.074 "compare_and_write": false, 00:20:57.074 "flush": false, 00:20:57.074 "nvme_admin": false, 00:20:57.074 "nvme_io": false, 00:20:57.074 "read": true, 00:20:57.074 "reset": true, 00:20:57.074 "unmap": true, 00:20:57.074 "write": true, 00:20:57.074 "write_zeroes": true 00:20:57.074 }, 00:20:57.074 "uuid": "d4537b9c-f016-4767-964a-18ca6d49d86e", 00:20:57.074 "zoned": false 00:20:57.074 } 00:20:57.074 ] 00:20:57.074 13:31:14 -- common/autotest_common.sh@893 -- # return 0 00:20:57.074 13:31:14 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:57.074 13:31:14 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:20:57.331 13:31:14 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:20:57.331 13:31:14 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:57.331 13:31:14 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:20:57.588 13:31:14 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:20:57.588 13:31:14 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d4537b9c-f016-4767-964a-18ca6d49d86e 00:20:57.846 13:31:15 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dbca1b28-140f-4f42-8b2b-114cb897d860 00:20:58.103 13:31:15 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:58.361 13:31:15 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:58.618 ************************************ 00:20:58.618 END TEST lvs_grow_clean 00:20:58.618 ************************************ 00:20:58.619 00:20:58.619 real 0m18.431s 00:20:58.619 user 0m17.759s 00:20:58.619 sys 0m2.374s 00:20:58.619 13:31:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:58.619 13:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:58.877 13:31:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:58.877 13:31:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:58.877 13:31:16 -- common/autotest_common.sh@10 -- # set +x 00:20:58.877 ************************************ 00:20:58.877 START TEST lvs_grow_dirty 00:20:58.877 ************************************ 00:20:58.877 13:31:16 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:58.877 13:31:16 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:59.134 13:31:16 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:59.134 13:31:16 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:59.391 13:31:16 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:20:59.391 13:31:16 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:59.391 13:31:16 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:20:59.650 13:31:16 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:59.650 13:31:16 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:59.650 13:31:16 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c7c60212-be9e-4eec-a89f-0f616a4f94bf lvol 150 00:20:59.908 13:31:17 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cc69935d-b3b3-4daf-9234-35bcc21734c5 00:20:59.908 13:31:17 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:59.908 13:31:17 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:21:00.173 [2024-04-26 13:31:17.453766] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:21:00.174 [2024-04-26 13:31:17.453870] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:21:00.174 true 00:21:00.174 13:31:17 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:21:00.174 13:31:17 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:00.438 13:31:17 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:21:00.438 13:31:17 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:00.696 13:31:18 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:00.955 13:31:18 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.213 13:31:18 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:01.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
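Both lvs_grow variants drive I/O the same way: bdevperf is started idle on its own RPC socket, the exported namespace is attached as a bdev, and the run is kicked off remotely. Condensed from the trace (a sketch, not the test script itself):

# start bdevperf idle (-z), listening on its own RPC socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!

# attach the exported namespace as bdev Nvme0n1 over NVMe/TCP
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0

# launch the 10-second randwrite run; the lvstore is grown while this is running
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests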
00:21:01.472 13:31:18 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72995 00:21:01.472 13:31:18 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:21:01.472 13:31:18 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.472 13:31:18 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72995 /var/tmp/bdevperf.sock 00:21:01.472 13:31:18 -- common/autotest_common.sh@817 -- # '[' -z 72995 ']' 00:21:01.472 13:31:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.472 13:31:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:01.472 13:31:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.472 13:31:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:01.472 13:31:18 -- common/autotest_common.sh@10 -- # set +x 00:21:01.473 [2024-04-26 13:31:18.862612] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:01.473 [2024-04-26 13:31:18.862708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ] 00:21:01.731 [2024-04-26 13:31:18.991320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.731 [2024-04-26 13:31:19.103561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.673 13:31:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:02.673 13:31:19 -- common/autotest_common.sh@850 -- # return 0 00:21:02.673 13:31:19 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:21:02.944 Nvme0n1 00:21:02.945 13:31:20 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:21:03.203 [ 00:21:03.203 { 00:21:03.203 "aliases": [ 00:21:03.203 "cc69935d-b3b3-4daf-9234-35bcc21734c5" 00:21:03.203 ], 00:21:03.203 "assigned_rate_limits": { 00:21:03.203 "r_mbytes_per_sec": 0, 00:21:03.203 "rw_ios_per_sec": 0, 00:21:03.203 "rw_mbytes_per_sec": 0, 00:21:03.203 "w_mbytes_per_sec": 0 00:21:03.203 }, 00:21:03.203 "block_size": 4096, 00:21:03.203 "claimed": false, 00:21:03.203 "driver_specific": { 00:21:03.203 "mp_policy": "active_passive", 00:21:03.203 "nvme": [ 00:21:03.203 { 00:21:03.203 "ctrlr_data": { 00:21:03.203 "ana_reporting": false, 00:21:03.203 "cntlid": 1, 00:21:03.203 "firmware_revision": "24.05", 00:21:03.203 "model_number": "SPDK bdev Controller", 00:21:03.203 "multi_ctrlr": true, 00:21:03.203 "oacs": { 00:21:03.203 "firmware": 0, 00:21:03.203 "format": 0, 00:21:03.203 "ns_manage": 0, 00:21:03.203 "security": 0 00:21:03.203 }, 00:21:03.203 "serial_number": "SPDK0", 00:21:03.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.203 "vendor_id": "0x8086" 00:21:03.203 }, 00:21:03.203 "ns_data": { 00:21:03.203 "can_share": true, 00:21:03.203 "id": 1 00:21:03.203 }, 00:21:03.203 "trid": { 00:21:03.203 "adrfam": "IPv4", 00:21:03.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.203 "traddr": "10.0.0.2", 00:21:03.203 "trsvcid": "4420", 00:21:03.203 "trtype": "TCP" 00:21:03.203 }, 
00:21:03.203 "vs": { 00:21:03.203 "nvme_version": "1.3" 00:21:03.203 } 00:21:03.203 } 00:21:03.203 ] 00:21:03.203 }, 00:21:03.203 "memory_domains": [ 00:21:03.203 { 00:21:03.203 "dma_device_id": "system", 00:21:03.203 "dma_device_type": 1 00:21:03.203 } 00:21:03.203 ], 00:21:03.203 "name": "Nvme0n1", 00:21:03.203 "num_blocks": 38912, 00:21:03.203 "product_name": "NVMe disk", 00:21:03.203 "supported_io_types": { 00:21:03.203 "abort": true, 00:21:03.203 "compare": true, 00:21:03.203 "compare_and_write": true, 00:21:03.203 "flush": true, 00:21:03.203 "nvme_admin": true, 00:21:03.203 "nvme_io": true, 00:21:03.203 "read": true, 00:21:03.203 "reset": true, 00:21:03.203 "unmap": true, 00:21:03.203 "write": true, 00:21:03.203 "write_zeroes": true 00:21:03.203 }, 00:21:03.203 "uuid": "cc69935d-b3b3-4daf-9234-35bcc21734c5", 00:21:03.203 "zoned": false 00:21:03.203 } 00:21:03.203 ] 00:21:03.203 13:31:20 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.203 13:31:20 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73043 00:21:03.203 13:31:20 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:21:03.203 Running I/O for 10 seconds... 00:21:04.137 Latency(us) 00:21:04.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.137 Nvme0n1 : 1.00 8298.00 32.41 0.00 0.00 0.00 0.00 0.00 00:21:04.137 =================================================================================================================== 00:21:04.137 Total : 8298.00 32.41 0.00 0.00 0.00 0.00 0.00 00:21:04.137 00:21:05.071 13:31:22 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:05.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:05.330 Nvme0n1 : 2.00 8336.50 32.56 0.00 0.00 0.00 0.00 0.00 00:21:05.330 =================================================================================================================== 00:21:05.330 Total : 8336.50 32.56 0.00 0.00 0.00 0.00 0.00 00:21:05.330 00:21:05.589 true 00:21:05.589 13:31:22 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:05.589 13:31:22 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:21:05.847 13:31:23 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:21:05.847 13:31:23 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:21:05.847 13:31:23 -- target/nvmf_lvs_grow.sh@65 -- # wait 73043 00:21:06.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:06.414 Nvme0n1 : 3.00 8390.33 32.77 0.00 0.00 0.00 0.00 0.00 00:21:06.414 =================================================================================================================== 00:21:06.414 Total : 8390.33 32.77 0.00 0.00 0.00 0.00 0.00 00:21:06.414 00:21:07.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:07.348 Nvme0n1 : 4.00 8432.25 32.94 0.00 0.00 0.00 0.00 0.00 00:21:07.348 =================================================================================================================== 00:21:07.348 Total : 8432.25 32.94 0.00 0.00 0.00 0.00 0.00 00:21:07.348 00:21:08.320 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:08.320 Nvme0n1 : 5.00 
8417.00 32.88 0.00 0.00 0.00 0.00 0.00 00:21:08.320 =================================================================================================================== 00:21:08.320 Total : 8417.00 32.88 0.00 0.00 0.00 0.00 0.00 00:21:08.320 00:21:09.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:09.254 Nvme0n1 : 6.00 8417.67 32.88 0.00 0.00 0.00 0.00 0.00 00:21:09.254 =================================================================================================================== 00:21:09.254 Total : 8417.67 32.88 0.00 0.00 0.00 0.00 0.00 00:21:09.254 00:21:10.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:10.193 Nvme0n1 : 7.00 8254.71 32.24 0.00 0.00 0.00 0.00 0.00 00:21:10.193 =================================================================================================================== 00:21:10.193 Total : 8254.71 32.24 0.00 0.00 0.00 0.00 0.00 00:21:10.193 00:21:11.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:11.566 Nvme0n1 : 8.00 8211.12 32.07 0.00 0.00 0.00 0.00 0.00 00:21:11.566 =================================================================================================================== 00:21:11.566 Total : 8211.12 32.07 0.00 0.00 0.00 0.00 0.00 00:21:11.566 00:21:12.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:12.133 Nvme0n1 : 9.00 8180.67 31.96 0.00 0.00 0.00 0.00 0.00 00:21:12.133 =================================================================================================================== 00:21:12.133 Total : 8180.67 31.96 0.00 0.00 0.00 0.00 0.00 00:21:12.133 00:21:13.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:13.536 Nvme0n1 : 10.00 8153.10 31.85 0.00 0.00 0.00 0.00 0.00 00:21:13.536 =================================================================================================================== 00:21:13.536 Total : 8153.10 31.85 0.00 0.00 0.00 0.00 0.00 00:21:13.536 00:21:13.536 00:21:13.536 Latency(us) 00:21:13.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:13.536 Nvme0n1 : 10.01 8159.47 31.87 0.00 0.00 15682.30 6196.13 105334.23 00:21:13.536 =================================================================================================================== 00:21:13.536 Total : 8159.47 31.87 0.00 0.00 15682.30 6196.13 105334.23 00:21:13.536 0 00:21:13.537 13:31:30 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72995 00:21:13.537 13:31:30 -- common/autotest_common.sh@936 -- # '[' -z 72995 ']' 00:21:13.537 13:31:30 -- common/autotest_common.sh@940 -- # kill -0 72995 00:21:13.537 13:31:30 -- common/autotest_common.sh@941 -- # uname 00:21:13.537 13:31:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:13.537 13:31:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72995 00:21:13.537 13:31:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:13.537 13:31:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:13.537 13:31:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72995' 00:21:13.537 killing process with pid 72995 00:21:13.537 13:31:30 -- common/autotest_common.sh@955 -- # kill 72995 00:21:13.537 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.537 00:21:13.537 Latency(us) 00:21:13.537 Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:21:13.537 =================================================================================================================== 00:21:13.537 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.537 13:31:30 -- common/autotest_common.sh@960 -- # wait 72995 00:21:13.537 13:31:30 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:13.795 13:31:31 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:13.795 13:31:31 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:21:14.053 13:31:31 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:21:14.053 13:31:31 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:21:14.053 13:31:31 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72379 00:21:14.053 13:31:31 -- target/nvmf_lvs_grow.sh@74 -- # wait 72379 00:21:14.312 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72379 Killed "${NVMF_APP[@]}" "$@" 00:21:14.312 13:31:31 -- target/nvmf_lvs_grow.sh@74 -- # true 00:21:14.312 13:31:31 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:21:14.312 13:31:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:14.312 13:31:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:14.312 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:21:14.312 13:31:31 -- nvmf/common.sh@470 -- # nvmfpid=73199 00:21:14.312 13:31:31 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:14.312 13:31:31 -- nvmf/common.sh@471 -- # waitforlisten 73199 00:21:14.312 13:31:31 -- common/autotest_common.sh@817 -- # '[' -z 73199 ']' 00:21:14.312 13:31:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.312 13:31:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:14.312 13:31:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.312 13:31:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:14.312 13:31:31 -- common/autotest_common.sh@10 -- # set +x 00:21:14.312 [2024-04-26 13:31:31.593319] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:14.312 [2024-04-26 13:31:31.594200] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.312 [2024-04-26 13:31:31.730664] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.570 [2024-04-26 13:31:31.845421] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.570 [2024-04-26 13:31:31.845477] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.570 [2024-04-26 13:31:31.845490] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.570 [2024-04-26 13:31:31.845499] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:14.570 [2024-04-26 13:31:31.845506] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.570 [2024-04-26 13:31:31.845538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.504 13:31:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:15.504 13:31:32 -- common/autotest_common.sh@850 -- # return 0 00:21:15.504 13:31:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:15.504 13:31:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:15.504 13:31:32 -- common/autotest_common.sh@10 -- # set +x 00:21:15.504 13:31:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.504 13:31:32 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:15.504 [2024-04-26 13:31:32.939930] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:15.504 [2024-04-26 13:31:32.940192] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:15.504 [2024-04-26 13:31:32.940338] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:15.762 13:31:32 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:21:15.762 13:31:32 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:15.762 13:31:32 -- common/autotest_common.sh@885 -- # local bdev_name=cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:15.762 13:31:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:15.762 13:31:32 -- common/autotest_common.sh@887 -- # local i 00:21:15.762 13:31:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:15.762 13:31:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:15.762 13:31:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:16.019 13:31:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc69935d-b3b3-4daf-9234-35bcc21734c5 -t 2000 00:21:16.277 [ 00:21:16.277 { 00:21:16.277 "aliases": [ 00:21:16.277 "lvs/lvol" 00:21:16.277 ], 00:21:16.277 "assigned_rate_limits": { 00:21:16.277 "r_mbytes_per_sec": 0, 00:21:16.277 "rw_ios_per_sec": 0, 00:21:16.277 "rw_mbytes_per_sec": 0, 00:21:16.277 "w_mbytes_per_sec": 0 00:21:16.277 }, 00:21:16.277 "block_size": 4096, 00:21:16.277 "claimed": false, 00:21:16.277 "driver_specific": { 00:21:16.277 "lvol": { 00:21:16.277 "base_bdev": "aio_bdev", 00:21:16.277 "clone": false, 00:21:16.277 "esnap_clone": false, 00:21:16.277 "lvol_store_uuid": "c7c60212-be9e-4eec-a89f-0f616a4f94bf", 00:21:16.277 "snapshot": false, 00:21:16.277 "thin_provision": false 00:21:16.277 } 00:21:16.277 }, 00:21:16.277 "name": "cc69935d-b3b3-4daf-9234-35bcc21734c5", 00:21:16.277 "num_blocks": 38912, 00:21:16.277 "product_name": "Logical Volume", 00:21:16.277 "supported_io_types": { 00:21:16.277 "abort": false, 00:21:16.277 "compare": false, 00:21:16.277 "compare_and_write": false, 00:21:16.277 "flush": false, 00:21:16.277 "nvme_admin": false, 00:21:16.277 "nvme_io": false, 00:21:16.277 "read": true, 00:21:16.277 "reset": true, 00:21:16.277 "unmap": true, 00:21:16.277 "write": true, 00:21:16.277 "write_zeroes": true 00:21:16.277 }, 00:21:16.277 "uuid": "cc69935d-b3b3-4daf-9234-35bcc21734c5", 00:21:16.277 "zoned": false 00:21:16.277 } 00:21:16.277 ] 00:21:16.277 13:31:33 -- common/autotest_common.sh@893 -- # return 0 
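(The restart above is the dirty-recovery path: the previous target was killed with -9, so the lvstore was never unloaded cleanly, and re-creating the AIO bdev on the new target triggers blobstore recovery before the lvol reappears. A hand-run sketch of the same check, with $NVMF_PID, $LVS and $LVOL standing in for pid 72379 and the UUIDs shown in the log; in the test the target is launched inside the nvmf_tgt_ns_spdk namespace:)
  kill -9 "$NVMF_PID"                                    # leave the lvstore dirty on disk
  build/bin/nvmf_tgt -m 0x1 &                            # fresh target process
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_get_bdevs -b "$LVOL" -t 2000       # waits up to 2 s for the recovered lvol bdev
  scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters, .[0].total_data_clusters'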
00:21:16.277 13:31:33 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:16.277 13:31:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:21:16.535 13:31:33 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:21:16.535 13:31:33 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:21:16.535 13:31:33 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:16.793 13:31:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:21:16.793 13:31:34 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:17.052 [2024-04-26 13:31:34.437067] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:17.052 13:31:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:17.052 13:31:34 -- common/autotest_common.sh@638 -- # local es=0 00:21:17.052 13:31:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:17.052 13:31:34 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.052 13:31:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.052 13:31:34 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.052 13:31:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.052 13:31:34 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.052 13:31:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:17.052 13:31:34 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:17.052 13:31:34 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:17.052 13:31:34 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:17.310 2024/04/26 13:31:34 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c7c60212-be9e-4eec-a89f-0f616a4f94bf], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:21:17.310 request: 00:21:17.310 { 00:21:17.310 "method": "bdev_lvol_get_lvstores", 00:21:17.310 "params": { 00:21:17.310 "uuid": "c7c60212-be9e-4eec-a89f-0f616a4f94bf" 00:21:17.310 } 00:21:17.310 } 00:21:17.310 Got JSON-RPC error response 00:21:17.310 GoRPCClient: error on JSON-RPC call 00:21:17.568 13:31:34 -- common/autotest_common.sh@641 -- # es=1 00:21:17.568 13:31:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:17.568 13:31:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:17.568 13:31:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:17.568 13:31:34 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:17.826 aio_bdev 00:21:17.826 13:31:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:17.826 13:31:35 -- common/autotest_common.sh@885 -- # local 
bdev_name=cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:17.826 13:31:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:21:17.826 13:31:35 -- common/autotest_common.sh@887 -- # local i 00:21:17.826 13:31:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:21:17.826 13:31:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:21:17.826 13:31:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:18.085 13:31:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc69935d-b3b3-4daf-9234-35bcc21734c5 -t 2000 00:21:18.404 [ 00:21:18.404 { 00:21:18.404 "aliases": [ 00:21:18.404 "lvs/lvol" 00:21:18.404 ], 00:21:18.404 "assigned_rate_limits": { 00:21:18.404 "r_mbytes_per_sec": 0, 00:21:18.404 "rw_ios_per_sec": 0, 00:21:18.404 "rw_mbytes_per_sec": 0, 00:21:18.404 "w_mbytes_per_sec": 0 00:21:18.404 }, 00:21:18.404 "block_size": 4096, 00:21:18.404 "claimed": false, 00:21:18.404 "driver_specific": { 00:21:18.404 "lvol": { 00:21:18.404 "base_bdev": "aio_bdev", 00:21:18.404 "clone": false, 00:21:18.404 "esnap_clone": false, 00:21:18.404 "lvol_store_uuid": "c7c60212-be9e-4eec-a89f-0f616a4f94bf", 00:21:18.404 "snapshot": false, 00:21:18.404 "thin_provision": false 00:21:18.404 } 00:21:18.404 }, 00:21:18.404 "name": "cc69935d-b3b3-4daf-9234-35bcc21734c5", 00:21:18.404 "num_blocks": 38912, 00:21:18.404 "product_name": "Logical Volume", 00:21:18.404 "supported_io_types": { 00:21:18.404 "abort": false, 00:21:18.404 "compare": false, 00:21:18.404 "compare_and_write": false, 00:21:18.404 "flush": false, 00:21:18.404 "nvme_admin": false, 00:21:18.404 "nvme_io": false, 00:21:18.404 "read": true, 00:21:18.404 "reset": true, 00:21:18.404 "unmap": true, 00:21:18.404 "write": true, 00:21:18.404 "write_zeroes": true 00:21:18.404 }, 00:21:18.404 "uuid": "cc69935d-b3b3-4daf-9234-35bcc21734c5", 00:21:18.404 "zoned": false 00:21:18.404 } 00:21:18.404 ] 00:21:18.404 13:31:35 -- common/autotest_common.sh@893 -- # return 0 00:21:18.404 13:31:35 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:18.404 13:31:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:21:18.662 13:31:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:21:18.662 13:31:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:21:18.662 13:31:35 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:18.920 13:31:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:21:18.920 13:31:36 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cc69935d-b3b3-4daf-9234-35bcc21734c5 00:21:19.177 13:31:36 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7c60212-be9e-4eec-a89f-0f616a4f94bf 00:21:19.434 13:31:36 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:19.691 13:31:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:20.257 ************************************ 00:21:20.257 END TEST lvs_grow_dirty 00:21:20.257 ************************************ 00:21:20.257 00:21:20.257 real 0m21.290s 00:21:20.257 user 0m44.031s 00:21:20.257 sys 0m7.973s 00:21:20.257 13:31:37 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.257 13:31:37 -- common/autotest_common.sh@10 -- # set +x 00:21:20.257 13:31:37 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:20.257 13:31:37 -- common/autotest_common.sh@794 -- # type=--id 00:21:20.257 13:31:37 -- common/autotest_common.sh@795 -- # id=0 00:21:20.257 13:31:37 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:21:20.257 13:31:37 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:20.257 13:31:37 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:21:20.257 13:31:37 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:21:20.257 13:31:37 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:21:20.257 13:31:37 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:20.257 nvmf_trace.0 00:21:20.257 13:31:37 -- common/autotest_common.sh@809 -- # return 0 00:21:20.257 13:31:37 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:20.257 13:31:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:20.257 13:31:37 -- nvmf/common.sh@117 -- # sync 00:21:20.516 13:31:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:20.516 13:31:37 -- nvmf/common.sh@120 -- # set +e 00:21:20.516 13:31:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:20.516 13:31:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:20.516 rmmod nvme_tcp 00:21:20.516 rmmod nvme_fabrics 00:21:20.516 rmmod nvme_keyring 00:21:20.516 13:31:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:20.516 13:31:37 -- nvmf/common.sh@124 -- # set -e 00:21:20.516 13:31:37 -- nvmf/common.sh@125 -- # return 0 00:21:20.516 13:31:37 -- nvmf/common.sh@478 -- # '[' -n 73199 ']' 00:21:20.516 13:31:37 -- nvmf/common.sh@479 -- # killprocess 73199 00:21:20.516 13:31:37 -- common/autotest_common.sh@936 -- # '[' -z 73199 ']' 00:21:20.516 13:31:37 -- common/autotest_common.sh@940 -- # kill -0 73199 00:21:20.516 13:31:37 -- common/autotest_common.sh@941 -- # uname 00:21:20.516 13:31:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.516 13:31:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73199 00:21:20.516 13:31:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:20.516 13:31:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:20.516 killing process with pid 73199 00:21:20.516 13:31:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73199' 00:21:20.516 13:31:37 -- common/autotest_common.sh@955 -- # kill 73199 00:21:20.516 13:31:37 -- common/autotest_common.sh@960 -- # wait 73199 00:21:20.774 13:31:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:20.774 13:31:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:20.774 13:31:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:20.774 13:31:38 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.774 13:31:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:20.774 13:31:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.774 13:31:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.774 13:31:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.774 13:31:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:20.774 00:21:20.774 real 0m42.504s 00:21:20.774 user 1m9.016s 00:21:20.774 sys 0m11.172s 00:21:20.774 13:31:38 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.774 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:21:20.774 ************************************ 00:21:20.774 END TEST nvmf_lvs_grow 00:21:20.774 ************************************ 00:21:20.774 13:31:38 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:20.774 13:31:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:20.774 13:31:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:20.774 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:21:20.774 ************************************ 00:21:20.774 START TEST nvmf_bdev_io_wait 00:21:20.774 ************************************ 00:21:20.774 13:31:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:21.033 * Looking for test storage... 00:21:21.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:21.033 13:31:38 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.033 13:31:38 -- nvmf/common.sh@7 -- # uname -s 00:21:21.033 13:31:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.033 13:31:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.033 13:31:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.033 13:31:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.033 13:31:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.033 13:31:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.033 13:31:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.033 13:31:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.033 13:31:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.033 13:31:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.033 13:31:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:21.033 13:31:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:21.033 13:31:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.033 13:31:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.033 13:31:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.033 13:31:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.033 13:31:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.033 13:31:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.033 13:31:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.033 13:31:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.033 13:31:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.033 13:31:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.033 13:31:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.033 13:31:38 -- paths/export.sh@5 -- # export PATH 00:21:21.033 13:31:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.033 13:31:38 -- nvmf/common.sh@47 -- # : 0 00:21:21.033 13:31:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.033 13:31:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.033 13:31:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.033 13:31:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.033 13:31:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.033 13:31:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.033 13:31:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.033 13:31:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.033 13:31:38 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.033 13:31:38 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.033 13:31:38 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:21.033 13:31:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:21.033 13:31:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.033 13:31:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:21.033 13:31:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:21.033 13:31:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:21.033 13:31:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.033 13:31:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.033 13:31:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.033 13:31:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:21.033 13:31:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:21.033 13:31:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:21.034 13:31:38 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:21.034 13:31:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
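(nvmf_veth_init, which runs next, builds the virtual test network: a network namespace for the target, veth pairs joined by a bridge, and an iptables rule for the NVMe/TCP port. Condensed from the commands that follow — the second target interface and the explicit link-up steps are omitted here — it is roughly:)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT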
00:21:21.034 13:31:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:21.034 13:31:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.034 13:31:38 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.034 13:31:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:21.034 13:31:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:21.034 13:31:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:21.034 13:31:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:21.034 13:31:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:21.034 13:31:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.034 13:31:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:21.034 13:31:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:21.034 13:31:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:21.034 13:31:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:21.034 13:31:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:21.034 13:31:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:21.034 Cannot find device "nvmf_tgt_br" 00:21:21.034 13:31:38 -- nvmf/common.sh@155 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.034 Cannot find device "nvmf_tgt_br2" 00:21:21.034 13:31:38 -- nvmf/common.sh@156 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:21.034 13:31:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:21.034 Cannot find device "nvmf_tgt_br" 00:21:21.034 13:31:38 -- nvmf/common.sh@158 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:21.034 Cannot find device "nvmf_tgt_br2" 00:21:21.034 13:31:38 -- nvmf/common.sh@159 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:21.034 13:31:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:21.034 13:31:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.034 13:31:38 -- nvmf/common.sh@162 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.034 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.034 13:31:38 -- nvmf/common.sh@163 -- # true 00:21:21.034 13:31:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:21.034 13:31:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:21.293 13:31:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:21.293 13:31:38 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:21.293 13:31:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:21.293 13:31:38 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:21.293 13:31:38 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:21.293 13:31:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:21.293 13:31:38 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:21.293 
13:31:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:21.293 13:31:38 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:21.293 13:31:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:21.293 13:31:38 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:21.293 13:31:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:21.293 13:31:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:21.293 13:31:38 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:21.293 13:31:38 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:21.293 13:31:38 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:21.293 13:31:38 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:21.293 13:31:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:21.293 13:31:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:21.293 13:31:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:21.293 13:31:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:21.293 13:31:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:21.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:21:21.293 00:21:21.293 --- 10.0.0.2 ping statistics --- 00:21:21.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.293 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:21.293 13:31:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:21.293 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:21.293 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:21:21.293 00:21:21.293 --- 10.0.0.3 ping statistics --- 00:21:21.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.293 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:21.293 13:31:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:21.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:21.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:21.293 00:21:21.293 --- 10.0.0.1 ping statistics --- 00:21:21.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.293 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:21.293 13:31:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.293 13:31:38 -- nvmf/common.sh@422 -- # return 0 00:21:21.293 13:31:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:21.293 13:31:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.293 13:31:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:21.293 13:31:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:21.293 13:31:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.293 13:31:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:21.293 13:31:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:21.293 13:31:38 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:21.293 13:31:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:21.293 13:31:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:21.293 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.293 13:31:38 -- nvmf/common.sh@470 -- # nvmfpid=73631 00:21:21.293 13:31:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:21.293 13:31:38 -- nvmf/common.sh@471 -- # waitforlisten 73631 00:21:21.293 13:31:38 -- common/autotest_common.sh@817 -- # '[' -z 73631 ']' 00:21:21.293 13:31:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.552 13:31:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:21.552 13:31:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.552 13:31:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:21.552 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:21:21.552 [2024-04-26 13:31:38.801986] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:21.552 [2024-04-26 13:31:38.802094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.552 [2024-04-26 13:31:38.942160] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.811 [2024-04-26 13:31:39.064098] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.811 [2024-04-26 13:31:39.064176] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.811 [2024-04-26 13:31:39.064189] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.811 [2024-04-26 13:31:39.064198] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.811 [2024-04-26 13:31:39.064205] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:21.811 [2024-04-26 13:31:39.064382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.811 [2024-04-26 13:31:39.064619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.811 [2024-04-26 13:31:39.065371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.811 [2024-04-26 13:31:39.065416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.746 13:31:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:22.746 13:31:39 -- common/autotest_common.sh@850 -- # return 0 00:21:22.746 13:31:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:22.746 13:31:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:22.746 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 13:31:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.746 13:31:39 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:22.746 13:31:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 13:31:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:39 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:22.746 13:31:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 13:31:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:39 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:22.746 13:31:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:39 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 [2024-04-26 13:31:39.992482] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.746 13:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:22.746 13:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 Malloc0 00:21:22.746 13:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.746 13:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 13:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.746 13:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 13:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.746 13:31:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:22.746 13:31:40 -- common/autotest_common.sh@10 -- # set +x 00:21:22.746 [2024-04-26 13:31:40.056140] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.746 13:31:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73684 00:21:22.746 13:31:40 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # config=() 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@30 -- # READ_PID=73686 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # local subsystem config 00:21:22.746 13:31:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:22.746 { 00:21:22.746 "params": { 00:21:22.746 "name": "Nvme$subsystem", 00:21:22.746 "trtype": "$TEST_TRANSPORT", 00:21:22.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.746 "adrfam": "ipv4", 00:21:22.746 "trsvcid": "$NVMF_PORT", 00:21:22.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.746 "hdgst": ${hdgst:-false}, 00:21:22.746 "ddgst": ${ddgst:-false} 00:21:22.746 }, 00:21:22.746 "method": "bdev_nvme_attach_controller" 00:21:22.746 } 00:21:22.746 EOF 00:21:22.746 )") 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # config=() 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # local subsystem config 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73688 00:21:22.746 13:31:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:22.746 { 00:21:22.746 "params": { 00:21:22.746 "name": "Nvme$subsystem", 00:21:22.746 "trtype": "$TEST_TRANSPORT", 00:21:22.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.746 "adrfam": "ipv4", 00:21:22.746 "trsvcid": "$NVMF_PORT", 00:21:22.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.746 "hdgst": ${hdgst:-false}, 00:21:22.746 "ddgst": ${ddgst:-false} 00:21:22.746 }, 00:21:22.746 "method": "bdev_nvme_attach_controller" 00:21:22.746 } 00:21:22.746 EOF 00:21:22.746 )") 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # cat 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73690 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@35 -- # sync 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # cat 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:22.746 13:31:40 -- nvmf/common.sh@545 -- # jq . 
00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # config=() 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # local subsystem config 00:21:22.746 13:31:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:22.746 { 00:21:22.746 "params": { 00:21:22.746 "name": "Nvme$subsystem", 00:21:22.746 "trtype": "$TEST_TRANSPORT", 00:21:22.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.746 "adrfam": "ipv4", 00:21:22.746 "trsvcid": "$NVMF_PORT", 00:21:22.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.746 "hdgst": ${hdgst:-false}, 00:21:22.746 "ddgst": ${ddgst:-false} 00:21:22.746 }, 00:21:22.746 "method": "bdev_nvme_attach_controller" 00:21:22.746 } 00:21:22.746 EOF 00:21:22.746 )") 00:21:22.746 13:31:40 -- nvmf/common.sh@545 -- # jq . 00:21:22.746 13:31:40 -- nvmf/common.sh@543 -- # cat 00:21:22.746 13:31:40 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:22.746 13:31:40 -- nvmf/common.sh@546 -- # IFS=, 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # config=() 00:21:22.746 13:31:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:22.746 "params": { 00:21:22.746 "name": "Nvme1", 00:21:22.746 "trtype": "tcp", 00:21:22.746 "traddr": "10.0.0.2", 00:21:22.746 "adrfam": "ipv4", 00:21:22.746 "trsvcid": "4420", 00:21:22.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.746 "hdgst": false, 00:21:22.746 "ddgst": false 00:21:22.746 }, 00:21:22.746 "method": "bdev_nvme_attach_controller" 00:21:22.746 }' 00:21:22.746 13:31:40 -- nvmf/common.sh@521 -- # local subsystem config 00:21:22.746 13:31:40 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:22.747 13:31:40 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:22.747 { 00:21:22.747 "params": { 00:21:22.747 "name": "Nvme$subsystem", 00:21:22.747 "trtype": "$TEST_TRANSPORT", 00:21:22.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.747 "adrfam": "ipv4", 00:21:22.747 "trsvcid": "$NVMF_PORT", 00:21:22.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.747 "hdgst": ${hdgst:-false}, 00:21:22.747 "ddgst": ${ddgst:-false} 00:21:22.747 }, 00:21:22.747 "method": "bdev_nvme_attach_controller" 00:21:22.747 } 00:21:22.747 EOF 00:21:22.747 )") 00:21:22.747 13:31:40 -- nvmf/common.sh@546 -- # IFS=, 00:21:22.747 13:31:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:22.747 "params": { 00:21:22.747 "name": "Nvme1", 00:21:22.747 "trtype": "tcp", 00:21:22.747 "traddr": "10.0.0.2", 00:21:22.747 "adrfam": "ipv4", 00:21:22.747 "trsvcid": "4420", 00:21:22.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.747 "hdgst": false, 00:21:22.747 "ddgst": false 00:21:22.747 }, 00:21:22.747 "method": "bdev_nvme_attach_controller" 00:21:22.747 }' 00:21:22.747 13:31:40 -- nvmf/common.sh@545 -- # jq . 
00:21:22.747 13:31:40 -- nvmf/common.sh@546 -- # IFS=, 00:21:22.747 13:31:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:22.747 "params": { 00:21:22.747 "name": "Nvme1", 00:21:22.747 "trtype": "tcp", 00:21:22.747 "traddr": "10.0.0.2", 00:21:22.747 "adrfam": "ipv4", 00:21:22.747 "trsvcid": "4420", 00:21:22.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.747 "hdgst": false, 00:21:22.747 "ddgst": false 00:21:22.747 }, 00:21:22.747 "method": "bdev_nvme_attach_controller" 00:21:22.747 }' 00:21:22.747 13:31:40 -- nvmf/common.sh@543 -- # cat 00:21:22.747 13:31:40 -- nvmf/common.sh@545 -- # jq . 00:21:22.747 13:31:40 -- nvmf/common.sh@546 -- # IFS=, 00:21:22.747 13:31:40 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:22.747 "params": { 00:21:22.747 "name": "Nvme1", 00:21:22.747 "trtype": "tcp", 00:21:22.747 "traddr": "10.0.0.2", 00:21:22.747 "adrfam": "ipv4", 00:21:22.747 "trsvcid": "4420", 00:21:22.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.747 "hdgst": false, 00:21:22.747 "ddgst": false 00:21:22.747 }, 00:21:22.747 "method": "bdev_nvme_attach_controller" 00:21:22.747 }' 00:21:22.747 13:31:40 -- target/bdev_io_wait.sh@37 -- # wait 73684 00:21:22.747 [2024-04-26 13:31:40.139278] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:22.747 [2024-04-26 13:31:40.139362] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:22.747 [2024-04-26 13:31:40.146976] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:22.747 [2024-04-26 13:31:40.147247] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:22.747 [2024-04-26 13:31:40.152630] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:22.747 [2024-04-26 13:31:40.152732] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:22.747 [2024-04-26 13:31:40.156606] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:21:22.747 [2024-04-26 13:31:40.157084] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:23.006 [2024-04-26 13:31:40.340975] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.006 [2024-04-26 13:31:40.430683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:23.006 [2024-04-26 13:31:40.447114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.265 [2024-04-26 13:31:40.487473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.265 [2024-04-26 13:31:40.556382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:23.265 [2024-04-26 13:31:40.572660] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.265 [2024-04-26 13:31:40.584932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:23.265 Running I/O for 1 seconds... 00:21:23.265 [2024-04-26 13:31:40.673060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.524 Running I/O for 1 seconds... 00:21:23.524 Running I/O for 1 seconds... 00:21:23.524 Running I/O for 1 seconds... 00:21:24.460 00:21:24.460 Latency(us) 00:21:24.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.460 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:24.460 Nvme1n1 : 1.00 192717.71 752.80 0.00 0.00 661.69 269.96 1668.19 00:21:24.460 =================================================================================================================== 00:21:24.460 Total : 192717.71 752.80 0.00 0.00 661.69 269.96 1668.19 00:21:24.460 00:21:24.460 Latency(us) 00:21:24.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.460 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:24.460 Nvme1n1 : 1.01 8526.53 33.31 0.00 0.00 14945.72 5749.29 21090.68 00:21:24.460 =================================================================================================================== 00:21:24.460 Total : 8526.53 33.31 0.00 0.00 14945.72 5749.29 21090.68 00:21:24.460 00:21:24.460 Latency(us) 00:21:24.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.460 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:24.460 Nvme1n1 : 1.01 8699.71 33.98 0.00 0.00 14646.69 8281.37 23950.43 00:21:24.460 =================================================================================================================== 00:21:24.460 Total : 8699.71 33.98 0.00 0.00 14646.69 8281.37 23950.43 00:21:24.460 00:21:24.460 Latency(us) 00:21:24.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.460 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:24.460 Nvme1n1 : 1.01 7887.61 30.81 0.00 0.00 16158.58 3023.59 27525.12 00:21:24.460 =================================================================================================================== 00:21:24.460 Total : 7887.61 30.81 0.00 0.00 16158.58 3023.59 27525.12 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@38 -- # wait 73686 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@39 -- # wait 73688 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@40 -- # wait 73690 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
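(The four bdevperf jobs above differ only in workload, core mask and instance id; each receives its NVMe/TCP attach config on /dev/fd/63 via process substitution from gen_nvmf_target_json in test/nvmf/common.sh. An equivalent hand-written loop, purely for illustration of that pattern:)
  i=1
  for w in write read flush unmap; do
      build/examples/bdevperf -m "$(printf '0x%x' $((0x10 << (i - 1))))" -i "$i" \
          --json <(gen_nvmf_target_json) -q 128 -o 4096 -w "$w" -t 1 -s 256 &
      i=$((i + 1))
  done
  wait    # each 1-second job then reports its own IOPS/latency table, as seen above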
00:21:24.719 13:31:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.719 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:21:24.719 13:31:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:24.719 13:31:42 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:24.719 13:31:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:24.719 13:31:42 -- nvmf/common.sh@117 -- # sync 00:21:24.977 13:31:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.977 13:31:42 -- nvmf/common.sh@120 -- # set +e 00:21:24.977 13:31:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.978 13:31:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.978 rmmod nvme_tcp 00:21:24.978 rmmod nvme_fabrics 00:21:24.978 rmmod nvme_keyring 00:21:24.978 13:31:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.978 13:31:42 -- nvmf/common.sh@124 -- # set -e 00:21:24.978 13:31:42 -- nvmf/common.sh@125 -- # return 0 00:21:24.978 13:31:42 -- nvmf/common.sh@478 -- # '[' -n 73631 ']' 00:21:24.978 13:31:42 -- nvmf/common.sh@479 -- # killprocess 73631 00:21:24.978 13:31:42 -- common/autotest_common.sh@936 -- # '[' -z 73631 ']' 00:21:24.978 13:31:42 -- common/autotest_common.sh@940 -- # kill -0 73631 00:21:24.978 13:31:42 -- common/autotest_common.sh@941 -- # uname 00:21:24.978 13:31:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.978 13:31:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73631 00:21:24.978 13:31:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.978 13:31:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.978 13:31:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73631' 00:21:24.978 killing process with pid 73631 00:21:24.978 13:31:42 -- common/autotest_common.sh@955 -- # kill 73631 00:21:24.978 13:31:42 -- common/autotest_common.sh@960 -- # wait 73631 00:21:25.236 13:31:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:25.236 13:31:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:25.236 13:31:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:25.236 13:31:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.236 13:31:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.236 13:31:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.236 13:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.236 13:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.236 13:31:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:25.236 00:21:25.236 real 0m4.328s 00:21:25.236 user 0m18.845s 00:21:25.236 sys 0m2.110s 00:21:25.236 13:31:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.236 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:21:25.236 ************************************ 00:21:25.236 END TEST nvmf_bdev_io_wait 00:21:25.236 ************************************ 00:21:25.236 13:31:42 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:25.236 13:31:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:25.236 13:31:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:25.236 13:31:42 -- common/autotest_common.sh@10 -- # set +x 00:21:25.236 ************************************ 00:21:25.236 START TEST nvmf_queue_depth 00:21:25.236 
************************************ 00:21:25.236 13:31:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:25.495 * Looking for test storage... 00:21:25.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:25.495 13:31:42 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:25.495 13:31:42 -- nvmf/common.sh@7 -- # uname -s 00:21:25.495 13:31:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.495 13:31:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.495 13:31:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.495 13:31:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.495 13:31:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.495 13:31:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.495 13:31:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.495 13:31:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.495 13:31:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.495 13:31:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:25.495 13:31:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:25.495 13:31:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.495 13:31:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.495 13:31:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:25.495 13:31:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.495 13:31:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:25.495 13:31:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.495 13:31:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.495 13:31:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.495 13:31:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.495 13:31:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.495 13:31:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.495 13:31:42 -- paths/export.sh@5 -- # export PATH 00:21:25.495 13:31:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.495 13:31:42 -- nvmf/common.sh@47 -- # : 0 00:21:25.495 13:31:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:25.495 13:31:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:25.495 13:31:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.495 13:31:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.495 13:31:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.495 13:31:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:25.495 13:31:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:25.495 13:31:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:25.495 13:31:42 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:25.495 13:31:42 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:25.495 13:31:42 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:25.495 13:31:42 -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:25.495 13:31:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:25.495 13:31:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.495 13:31:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:25.495 13:31:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:25.495 13:31:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:25.495 13:31:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.495 13:31:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.495 13:31:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.495 13:31:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:25.495 13:31:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:25.495 13:31:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.495 13:31:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.495 13:31:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:25.495 13:31:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:25.495 13:31:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:25.495 13:31:42 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:25.495 13:31:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:25.495 13:31:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.495 13:31:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:25.495 13:31:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:25.495 13:31:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:25.495 13:31:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:25.495 13:31:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:25.495 13:31:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:25.495 Cannot find device "nvmf_tgt_br" 00:21:25.495 13:31:42 -- nvmf/common.sh@155 -- # true 00:21:25.495 13:31:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:25.495 Cannot find device "nvmf_tgt_br2" 00:21:25.495 13:31:42 -- nvmf/common.sh@156 -- # true 00:21:25.496 13:31:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:25.496 13:31:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:25.496 Cannot find device "nvmf_tgt_br" 00:21:25.496 13:31:42 -- nvmf/common.sh@158 -- # true 00:21:25.496 13:31:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:25.496 Cannot find device "nvmf_tgt_br2" 00:21:25.496 13:31:42 -- nvmf/common.sh@159 -- # true 00:21:25.496 13:31:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:25.496 13:31:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:25.496 13:31:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:25.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.496 13:31:42 -- nvmf/common.sh@162 -- # true 00:21:25.496 13:31:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:25.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:25.496 13:31:42 -- nvmf/common.sh@163 -- # true 00:21:25.496 13:31:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:25.496 13:31:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:25.496 13:31:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:25.496 13:31:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:25.496 13:31:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:25.754 13:31:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:25.754 13:31:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:25.754 13:31:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:25.754 13:31:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:25.754 13:31:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:25.754 13:31:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:25.754 13:31:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:25.754 13:31:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:25.754 13:31:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:25.754 13:31:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:21:25.754 13:31:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:25.754 13:31:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:25.754 13:31:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:25.754 13:31:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:25.754 13:31:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:25.754 13:31:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:25.754 13:31:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:25.754 13:31:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:25.754 13:31:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:25.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:21:25.754 00:21:25.754 --- 10.0.0.2 ping statistics --- 00:21:25.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.755 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:25.755 13:31:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:25.755 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:25.755 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:21:25.755 00:21:25.755 --- 10.0.0.3 ping statistics --- 00:21:25.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.755 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:25.755 13:31:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:25.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:25.755 00:21:25.755 --- 10.0.0.1 ping statistics --- 00:21:25.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.755 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:25.755 13:31:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.755 13:31:43 -- nvmf/common.sh@422 -- # return 0 00:21:25.755 13:31:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:25.755 13:31:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.755 13:31:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:25.755 13:31:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:25.755 13:31:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.755 13:31:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:25.755 13:31:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:25.755 13:31:43 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:25.755 13:31:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:25.755 13:31:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:25.755 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:25.755 13:31:43 -- nvmf/common.sh@470 -- # nvmfpid=73932 00:21:25.755 13:31:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:25.755 13:31:43 -- nvmf/common.sh@471 -- # waitforlisten 73932 00:21:25.755 13:31:43 -- common/autotest_common.sh@817 -- # '[' -z 73932 ']' 00:21:25.755 13:31:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.755 13:31:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:25.755 13:31:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.755 13:31:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:25.755 13:31:43 -- common/autotest_common.sh@10 -- # set +x 00:21:25.755 [2024-04-26 13:31:43.171025] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:25.755 [2024-04-26 13:31:43.171112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.013 [2024-04-26 13:31:43.307316] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.013 [2024-04-26 13:31:43.433081] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.013 [2024-04-26 13:31:43.433145] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.013 [2024-04-26 13:31:43.433161] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.013 [2024-04-26 13:31:43.433172] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.013 [2024-04-26 13:31:43.433181] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.013 [2024-04-26 13:31:43.433218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.949 13:31:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:26.949 13:31:44 -- common/autotest_common.sh@850 -- # return 0 00:21:26.949 13:31:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:26.949 13:31:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:26.949 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.949 13:31:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.949 13:31:44 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.949 13:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 [2024-04-26 13:31:44.265776] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.950 13:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.950 13:31:44 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:26.950 13:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 Malloc0 00:21:26.950 13:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.950 13:31:44 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.950 13:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 13:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.950 13:31:44 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.950 13:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 13:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.950 13:31:44 -- target/queue_depth.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.950 13:31:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 [2024-04-26 13:31:44.325869] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.950 13:31:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:26.950 13:31:44 -- target/queue_depth.sh@30 -- # bdevperf_pid=73982 00:21:26.950 13:31:44 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:26.950 13:31:44 -- target/queue_depth.sh@33 -- # waitforlisten 73982 /var/tmp/bdevperf.sock 00:21:26.950 13:31:44 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:26.950 13:31:44 -- common/autotest_common.sh@817 -- # '[' -z 73982 ']' 00:21:26.950 13:31:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:26.950 13:31:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:26.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:26.950 13:31:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:26.950 13:31:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:26.950 13:31:44 -- common/autotest_common.sh@10 -- # set +x 00:21:26.950 [2024-04-26 13:31:44.377673] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:26.950 [2024-04-26 13:31:44.377770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73982 ] 00:21:27.209 [2024-04-26 13:31:44.516899] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.209 [2024-04-26 13:31:44.640183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.143 13:31:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:28.143 13:31:45 -- common/autotest_common.sh@850 -- # return 0 00:21:28.143 13:31:45 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.143 13:31:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:28.143 13:31:45 -- common/autotest_common.sh@10 -- # set +x 00:21:28.143 NVMe0n1 00:21:28.143 13:31:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:28.143 13:31:45 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:28.401 Running I/O for 10 seconds... 
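Condensed, the queue_depth run above has two halves. On the target side, rpc_cmd (the harness wrapper that forwards to scripts/rpc.py against nvmf_tgt's /var/tmp/spdk.sock) creates the TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, and a subsystem with one namespace and one listener. On the initiator side, bdevperf is started in wait-for-RPC mode with the queue depth under test (-q 1024), attached to that subsystem, and driven for 10 seconds. A rough standalone sketch of the same sequence, with socket paths and addresses copied from this log, looks like:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target side (nvmf_tgt already running and listening on /var/tmp/spdk.sock)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf waits for RPC (-z), then runs the 10 s verify job at queue depth 1024
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The IOPS/latency table that follows is the output of that single verify job; the all-zero table printed later under "Received shutdown signal" is bdevperf's shutdown-time summary after killprocess, not a second measurement.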
00:21:38.467 00:21:38.467 Latency(us) 00:21:38.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.467 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:21:38.467 Verification LBA range: start 0x0 length 0x4000 00:21:38.467 NVMe0n1 : 10.09 8378.69 32.73 0.00 0.00 121599.69 28240.06 80549.70 00:21:38.467 =================================================================================================================== 00:21:38.467 Total : 8378.69 32.73 0.00 0.00 121599.69 28240.06 80549.70 00:21:38.467 0 00:21:38.467 13:31:55 -- target/queue_depth.sh@39 -- # killprocess 73982 00:21:38.467 13:31:55 -- common/autotest_common.sh@936 -- # '[' -z 73982 ']' 00:21:38.467 13:31:55 -- common/autotest_common.sh@940 -- # kill -0 73982 00:21:38.467 13:31:55 -- common/autotest_common.sh@941 -- # uname 00:21:38.467 13:31:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.467 13:31:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73982 00:21:38.467 killing process with pid 73982 00:21:38.467 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.467 00:21:38.468 Latency(us) 00:21:38.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.468 =================================================================================================================== 00:21:38.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.468 13:31:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:38.468 13:31:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:38.468 13:31:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73982' 00:21:38.468 13:31:55 -- common/autotest_common.sh@955 -- # kill 73982 00:21:38.468 13:31:55 -- common/autotest_common.sh@960 -- # wait 73982 00:21:38.726 13:31:56 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:38.726 13:31:56 -- target/queue_depth.sh@43 -- # nvmftestfini 00:21:38.726 13:31:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:38.726 13:31:56 -- nvmf/common.sh@117 -- # sync 00:21:38.726 13:31:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.726 13:31:56 -- nvmf/common.sh@120 -- # set +e 00:21:38.726 13:31:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.726 13:31:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.726 rmmod nvme_tcp 00:21:38.726 rmmod nvme_fabrics 00:21:38.726 rmmod nvme_keyring 00:21:38.726 13:31:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.726 13:31:56 -- nvmf/common.sh@124 -- # set -e 00:21:38.726 13:31:56 -- nvmf/common.sh@125 -- # return 0 00:21:38.726 13:31:56 -- nvmf/common.sh@478 -- # '[' -n 73932 ']' 00:21:38.726 13:31:56 -- nvmf/common.sh@479 -- # killprocess 73932 00:21:38.726 13:31:56 -- common/autotest_common.sh@936 -- # '[' -z 73932 ']' 00:21:38.726 13:31:56 -- common/autotest_common.sh@940 -- # kill -0 73932 00:21:38.726 13:31:56 -- common/autotest_common.sh@941 -- # uname 00:21:38.726 13:31:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.726 13:31:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73932 00:21:38.726 killing process with pid 73932 00:21:38.726 13:31:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:38.726 13:31:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:38.726 13:31:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73932' 00:21:38.726 13:31:56 -- 
common/autotest_common.sh@955 -- # kill 73932 00:21:38.726 13:31:56 -- common/autotest_common.sh@960 -- # wait 73932 00:21:38.984 13:31:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:38.984 13:31:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:38.984 13:31:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:38.984 13:31:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.984 13:31:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.984 13:31:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.984 13:31:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.984 13:31:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.242 13:31:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:39.242 00:21:39.242 real 0m13.800s 00:21:39.242 user 0m24.039s 00:21:39.242 sys 0m1.988s 00:21:39.242 13:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.242 13:31:56 -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 ************************************ 00:21:39.242 END TEST nvmf_queue_depth 00:21:39.242 ************************************ 00:21:39.242 13:31:56 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:39.242 13:31:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.242 13:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.242 13:31:56 -- common/autotest_common.sh@10 -- # set +x 00:21:39.242 ************************************ 00:21:39.242 START TEST nvmf_multipath 00:21:39.242 ************************************ 00:21:39.242 13:31:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:39.242 * Looking for test storage... 
00:21:39.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:39.242 13:31:56 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:39.242 13:31:56 -- nvmf/common.sh@7 -- # uname -s 00:21:39.242 13:31:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.242 13:31:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.242 13:31:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.242 13:31:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.242 13:31:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.242 13:31:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.242 13:31:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.242 13:31:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.242 13:31:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.242 13:31:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.501 13:31:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:39.501 13:31:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:21:39.501 13:31:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.501 13:31:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.501 13:31:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:39.501 13:31:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.501 13:31:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:39.501 13:31:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.501 13:31:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.501 13:31:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.501 13:31:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 13:31:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 13:31:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 13:31:56 -- paths/export.sh@5 -- # export PATH 00:21:39.501 13:31:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.501 13:31:56 -- nvmf/common.sh@47 -- # : 0 00:21:39.501 13:31:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.501 13:31:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.501 13:31:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.501 13:31:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.501 13:31:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.501 13:31:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.501 13:31:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.501 13:31:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.501 13:31:56 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.501 13:31:56 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.501 13:31:56 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:39.501 13:31:56 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:39.501 13:31:56 -- target/multipath.sh@43 -- # nvmftestinit 00:21:39.501 13:31:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:39.501 13:31:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.501 13:31:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:39.501 13:31:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:39.501 13:31:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:39.501 13:31:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.501 13:31:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.501 13:31:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.501 13:31:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:39.501 13:31:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:39.501 13:31:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:39.501 13:31:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:39.501 13:31:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:39.501 13:31:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:39.501 13:31:56 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.501 13:31:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.501 13:31:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:39.501 13:31:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:39.501 13:31:56 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:39.501 13:31:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:39.501 13:31:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:39.501 13:31:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.501 13:31:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:39.501 13:31:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:39.501 13:31:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:39.501 13:31:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:39.501 13:31:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:39.501 13:31:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:39.501 Cannot find device "nvmf_tgt_br" 00:21:39.501 13:31:56 -- nvmf/common.sh@155 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.501 Cannot find device "nvmf_tgt_br2" 00:21:39.501 13:31:56 -- nvmf/common.sh@156 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:39.501 13:31:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:39.501 Cannot find device "nvmf_tgt_br" 00:21:39.501 13:31:56 -- nvmf/common.sh@158 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:39.501 Cannot find device "nvmf_tgt_br2" 00:21:39.501 13:31:56 -- nvmf/common.sh@159 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:39.501 13:31:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:39.501 13:31:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:39.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.501 13:31:56 -- nvmf/common.sh@162 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:39.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:39.501 13:31:56 -- nvmf/common.sh@163 -- # true 00:21:39.501 13:31:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:39.501 13:31:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:39.501 13:31:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:39.501 13:31:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:39.501 13:31:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:39.501 13:31:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:39.501 13:31:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:39.501 13:31:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:39.501 13:31:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:39.501 13:31:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:39.501 13:31:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:39.501 13:31:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:39.501 13:31:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:39.501 13:31:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:21:39.760 13:31:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:39.760 13:31:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:39.760 13:31:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:39.760 13:31:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:39.760 13:31:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:39.760 13:31:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:39.760 13:31:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:39.760 13:31:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:39.760 13:31:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:39.760 13:31:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:39.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:21:39.760 00:21:39.760 --- 10.0.0.2 ping statistics --- 00:21:39.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.760 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:39.760 13:31:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:39.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:39.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:39.760 00:21:39.760 --- 10.0.0.3 ping statistics --- 00:21:39.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.760 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:39.760 13:31:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:39.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:39.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:39.760 00:21:39.760 --- 10.0.0.1 ping statistics --- 00:21:39.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.760 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:39.760 13:31:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.760 13:31:57 -- nvmf/common.sh@422 -- # return 0 00:21:39.760 13:31:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:39.760 13:31:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.760 13:31:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:39.761 13:31:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:39.761 13:31:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.761 13:31:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:39.761 13:31:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:39.761 13:31:57 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:21:39.761 13:31:57 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:21:39.761 13:31:57 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:21:39.761 13:31:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:39.761 13:31:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:39.761 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:39.761 13:31:57 -- nvmf/common.sh@470 -- # nvmfpid=74324 00:21:39.761 13:31:57 -- nvmf/common.sh@471 -- # waitforlisten 74324 00:21:39.761 13:31:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:39.761 13:31:57 -- common/autotest_common.sh@817 -- # '[' -z 74324 ']' 00:21:39.761 13:31:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.761 13:31:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:39.761 13:31:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.761 13:31:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:39.761 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:21:39.761 [2024-04-26 13:31:57.143071] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:21:39.761 [2024-04-26 13:31:57.143188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.019 [2024-04-26 13:31:57.283907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.019 [2024-04-26 13:31:57.446664] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.019 [2024-04-26 13:31:57.446738] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.019 [2024-04-26 13:31:57.446754] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.019 [2024-04-26 13:31:57.446765] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.019 [2024-04-26 13:31:57.446774] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
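At this point nvmf_tgt for the multipath test (pid 74324, core mask 0xF) is coming up inside the namespace that nvmf_veth_init rebuilt just above. The topology is the same one the queue_depth test used: the namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (10.0.0.2/24 and 10.0.0.3/24), the initiator end nvmf_init_if keeps 10.0.0.1/24 in the default namespace, and the host-side peers are enslaved to the nvmf_br bridge with an iptables ACCEPT for TCP port 4420. Stripped of the harness wrappers, the traced setup boils down to roughly this sketch (interface names and addresses copied from the log; the individual "ip link set ... up" steps are omitted for brevity):

# namespace plus two veth pairs for the target side, one for the initiator side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addresses: initiator outside the namespace, both target addresses inside it
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers together and let NVMe/TCP traffic through
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Having two target addresses is what makes the multipath test work: it adds listeners on both 10.0.0.2 and 10.0.0.3 and runs nvme connect against each (with -g -G), so the kernel host ends up with two paths (nvme0c0n1 and nvme0c1n1) under a single nvme-subsys0, and the rest of the test flips their ANA states between optimized, non-optimized and inaccessible with nvmf_subsystem_listener_set_ana_state while fio keeps I/O running.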
00:21:40.019 [2024-04-26 13:31:57.446940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.019 [2024-04-26 13:31:57.449870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.019 [2024-04-26 13:31:57.449975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.019 [2024-04-26 13:31:57.450106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.980 13:31:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:40.980 13:31:58 -- common/autotest_common.sh@850 -- # return 0 00:21:40.980 13:31:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:40.980 13:31:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:40.980 13:31:58 -- common/autotest_common.sh@10 -- # set +x 00:21:40.980 13:31:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.980 13:31:58 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:40.980 [2024-04-26 13:31:58.389026] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.238 13:31:58 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:41.238 Malloc0 00:21:41.496 13:31:58 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:21:41.755 13:31:58 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.014 13:31:59 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.273 [2024-04-26 13:31:59.533476] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.273 13:31:59 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:42.531 [2024-04-26 13:31:59.769890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:42.531 13:31:59 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:21:42.789 13:32:00 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:21:42.789 13:32:00 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:21:42.789 13:32:00 -- common/autotest_common.sh@1184 -- # local i=0 00:21:42.789 13:32:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:42.789 13:32:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:42.789 13:32:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:45.321 13:32:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:45.321 13:32:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:45.321 13:32:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:45.321 13:32:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:45.321 13:32:02 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.321 13:32:02 -- common/autotest_common.sh@1194 -- # return 0 00:21:45.321 13:32:02 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:21:45.321 13:32:02 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:21:45.321 13:32:02 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:21:45.321 13:32:02 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:21:45.321 13:32:02 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:21:45.321 13:32:02 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:21:45.321 13:32:02 -- target/multipath.sh@38 -- # return 0 00:21:45.321 13:32:02 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:21:45.321 13:32:02 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:21:45.321 13:32:02 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:21:45.321 13:32:02 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:21:45.321 13:32:02 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:21:45.321 13:32:02 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:21:45.321 13:32:02 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:21:45.321 13:32:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:45.321 13:32:02 -- target/multipath.sh@22 -- # local timeout=20 00:21:45.321 13:32:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:45.321 13:32:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:45.321 13:32:02 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:45.321 13:32:02 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:21:45.321 13:32:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:45.321 13:32:02 -- target/multipath.sh@22 -- # local timeout=20 00:21:45.321 13:32:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:45.321 13:32:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:45.321 13:32:02 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:45.321 13:32:02 -- target/multipath.sh@85 -- # echo numa 00:21:45.321 13:32:02 -- target/multipath.sh@88 -- # fio_pid=74467 00:21:45.321 13:32:02 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:45.321 13:32:02 -- target/multipath.sh@90 -- # sleep 1 00:21:45.321 [global] 00:21:45.321 thread=1 00:21:45.321 invalidate=1 00:21:45.321 rw=randrw 00:21:45.321 time_based=1 00:21:45.321 runtime=6 00:21:45.321 ioengine=libaio 00:21:45.321 direct=1 00:21:45.321 bs=4096 00:21:45.321 iodepth=128 00:21:45.321 norandommap=0 00:21:45.321 numjobs=1 00:21:45.321 00:21:45.321 verify_dump=1 00:21:45.321 verify_backlog=512 00:21:45.321 verify_state_save=0 00:21:45.321 do_verify=1 00:21:45.321 verify=crc32c-intel 00:21:45.321 [job0] 00:21:45.321 filename=/dev/nvme0n1 00:21:45.321 Could not set queue depth (nvme0n1) 00:21:45.321 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:45.321 fio-3.35 00:21:45.321 Starting 1 thread 00:21:45.886 13:32:03 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:46.144 13:32:03 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:46.402 13:32:03 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:21:46.402 13:32:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:46.402 13:32:03 -- target/multipath.sh@22 -- # local timeout=20 00:21:46.402 13:32:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:46.402 13:32:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:46.402 13:32:03 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:46.402 13:32:03 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:21:46.402 13:32:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:46.402 13:32:03 -- target/multipath.sh@22 -- # local timeout=20 00:21:46.402 13:32:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:46.402 13:32:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:46.402 13:32:03 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:46.402 13:32:03 -- target/multipath.sh@25 -- # sleep 1s 00:21:47.775 13:32:04 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:47.775 13:32:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:47.775 13:32:04 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:47.775 13:32:04 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.775 13:32:05 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:48.032 13:32:05 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:21:48.032 13:32:05 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:48.032 13:32:05 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.032 13:32:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:48.032 13:32:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:48.032 13:32:05 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:48.032 13:32:05 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:21:48.032 13:32:05 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:48.032 13:32:05 -- target/multipath.sh@22 -- # local timeout=20 00:21:48.032 13:32:05 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:48.032 13:32:05 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:48.032 13:32:05 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:48.032 13:32:05 -- target/multipath.sh@25 -- # sleep 1s 00:21:48.968 13:32:06 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:48.968 13:32:06 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:48.968 13:32:06 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:48.968 13:32:06 -- target/multipath.sh@104 -- # wait 74467 00:21:51.499 00:21:51.499 job0: (groupid=0, jobs=1): err= 0: pid=74488: Fri Apr 26 13:32:08 2024 00:21:51.499 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(237MiB/6003msec) 00:21:51.499 slat (usec): min=2, max=6407, avg=57.30, stdev=258.59 00:21:51.499 clat (usec): min=555, max=17573, avg=8576.86, stdev=1347.54 00:21:51.499 lat (usec): min=630, max=17588, avg=8634.16, stdev=1358.60 00:21:51.499 clat percentiles (usec): 00:21:51.499 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 7242], 20.00th=[ 7767], 00:21:51.499 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:21:51.499 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10945], 00:21:51.499 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14615], 99.95th=[16712], 00:21:51.499 | 99.99th=[17433] 00:21:51.499 bw ( KiB/s): min=11768, max=26120, per=53.92%, avg=21800.67, stdev=4044.10, samples=12 00:21:51.499 iops : min= 2942, max= 6530, avg=5450.17, stdev=1011.03, samples=12 00:21:51.499 write: IOPS=5823, BW=22.7MiB/s (23.9MB/s)(128MiB/5618msec); 0 zone resets 00:21:51.499 slat (usec): min=4, max=6396, avg=66.97, stdev=176.85 00:21:51.499 clat (usec): min=582, max=17080, avg=7365.28, stdev=1135.86 00:21:51.499 lat (usec): min=623, max=17104, avg=7432.25, stdev=1139.79 00:21:51.499 clat percentiles (usec): 00:21:51.499 | 1.00th=[ 4015], 5.00th=[ 5342], 10.00th=[ 6259], 20.00th=[ 6718], 00:21:51.499 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:21:51.499 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8356], 95.00th=[ 8717], 00:21:51.499 | 99.00th=[10945], 99.50th=[11994], 99.90th=[15270], 99.95th=[16712], 00:21:51.499 | 99.99th=[16909] 00:21:51.499 bw ( KiB/s): min=11864, max=25880, per=93.47%, avg=21772.00, stdev=3930.87, samples=12 00:21:51.499 iops : min= 2966, max= 6470, avg=5443.00, stdev=982.72, samples=12 00:21:51.499 lat (usec) : 750=0.01% 00:21:51.499 lat (msec) : 2=0.01%, 4=0.41%, 10=92.20%, 20=7.38% 00:21:51.499 cpu : usr=5.01%, sys=22.13%, ctx=5888, majf=0, minf=76 00:21:51.499 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:51.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:51.499 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:51.499 issued rwts: total=60676,32715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:51.499 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:51.499 00:21:51.499 Run status group 0 (all jobs): 00:21:51.499 READ: bw=39.5MiB/s (41.4MB/s), 39.5MiB/s-39.5MiB/s (41.4MB/s-41.4MB/s), io=237MiB (249MB), run=6003-6003msec 00:21:51.499 WRITE: bw=22.7MiB/s (23.9MB/s), 22.7MiB/s-22.7MiB/s (23.9MB/s-23.9MB/s), io=128MiB (134MB), run=5618-5618msec 00:21:51.499 00:21:51.499 Disk stats (read/write): 00:21:51.499 nvme0n1: ios=60007/31919, merge=0/0, ticks=482918/219646, in_queue=702564, util=98.63% 00:21:51.499 13:32:08 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:51.499 13:32:08 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:51.757 13:32:09 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:21:51.757 13:32:09 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:51.757 13:32:09 -- target/multipath.sh@22 -- # local timeout=20 00:21:51.757 13:32:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:51.757 13:32:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:51.757 13:32:09 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:51.757 13:32:09 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:21:51.757 13:32:09 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:51.757 13:32:09 -- target/multipath.sh@22 -- # local timeout=20 00:21:51.757 13:32:09 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:51.757 13:32:09 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:51.757 13:32:09 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:21:51.757 13:32:09 -- target/multipath.sh@25 -- # sleep 1s 00:21:52.692 13:32:10 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:52.692 13:32:10 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:52.692 13:32:10 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:52.692 13:32:10 -- target/multipath.sh@113 -- # echo round-robin 00:21:52.692 13:32:10 -- target/multipath.sh@116 -- # fio_pid=74615 00:21:52.692 13:32:10 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:52.692 13:32:10 -- target/multipath.sh@118 -- # sleep 1 00:21:52.692 [global] 00:21:52.692 thread=1 00:21:52.692 invalidate=1 00:21:52.692 rw=randrw 00:21:52.692 time_based=1 00:21:52.692 runtime=6 00:21:52.692 ioengine=libaio 00:21:52.692 direct=1 00:21:52.692 bs=4096 00:21:52.692 iodepth=128 00:21:52.692 norandommap=0 00:21:52.692 numjobs=1 00:21:52.692 00:21:52.692 verify_dump=1 00:21:52.692 verify_backlog=512 00:21:52.692 verify_state_save=0 00:21:52.692 do_verify=1 00:21:52.692 verify=crc32c-intel 00:21:52.692 [job0] 00:21:52.692 filename=/dev/nvme0n1 00:21:52.692 Could not set queue depth (nvme0n1) 00:21:52.951 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:52.951 fio-3.35 00:21:52.951 Starting 1 thread 00:21:53.886 13:32:11 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:54.175 13:32:11 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:54.441 13:32:11 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:21:54.441 13:32:11 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:54.441 13:32:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:54.441 13:32:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:54.441 13:32:11 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:54.441 13:32:11 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:54.441 13:32:11 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:21:54.441 13:32:11 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:54.441 13:32:11 -- target/multipath.sh@22 -- # local timeout=20 00:21:54.441 13:32:11 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:54.441 13:32:11 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:54.441 13:32:11 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:54.441 13:32:11 -- target/multipath.sh@25 -- # sleep 1s 00:21:55.377 13:32:12 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:55.377 13:32:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:55.377 13:32:12 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:55.377 13:32:12 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:55.636 13:32:12 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:55.894 13:32:13 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:21:55.894 13:32:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:55.894 13:32:13 -- target/multipath.sh@22 -- # local timeout=20 00:21:55.894 13:32:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:55.894 13:32:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:55.894 13:32:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:55.894 13:32:13 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:21:55.894 13:32:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:55.894 13:32:13 -- target/multipath.sh@22 -- # local timeout=20 00:21:55.894 13:32:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:55.894 13:32:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:55.894 13:32:13 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:55.894 13:32:13 -- target/multipath.sh@25 -- # sleep 1s 00:21:56.828 13:32:14 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:56.828 13:32:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:56.828 13:32:14 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:56.828 13:32:14 -- target/multipath.sh@132 -- # wait 74615 00:21:59.393 00:21:59.393 job0: (groupid=0, jobs=1): err= 0: pid=74636: Fri Apr 26 13:32:16 2024 00:21:59.393 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(240MiB/6007msec) 00:21:59.393 slat (usec): min=2, max=5611, avg=48.83, stdev=236.81 00:21:59.393 clat (usec): min=323, max=20953, avg=8605.74, stdev=2378.09 00:21:59.393 lat (usec): min=350, max=20963, avg=8654.57, stdev=2385.80 00:21:59.393 clat percentiles (usec): 00:21:59.393 | 1.00th=[ 2376], 5.00th=[ 3982], 10.00th=[ 5473], 20.00th=[ 7504], 00:21:59.393 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 8848], 00:21:59.393 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[11338], 95.00th=[12649], 00:21:59.393 | 99.00th=[15139], 99.50th=[16450], 99.90th=[19006], 99.95th=[19792], 00:21:59.393 | 99.99th=[20579] 00:21:59.393 bw ( KiB/s): min= 2392, max=30088, per=51.78%, avg=21158.09, stdev=7324.08, samples=11 00:21:59.393 iops : min= 598, max= 7522, avg=5289.45, stdev=1830.98, samples=11 00:21:59.393 write: IOPS=6006, BW=23.5MiB/s (24.6MB/s)(126MiB/5362msec); 0 zone resets 00:21:59.393 slat (usec): min=3, max=6358, avg=58.94, stdev=159.11 00:21:59.393 clat (usec): min=836, max=18228, avg=7266.94, stdev=2119.89 00:21:59.393 lat (usec): min=930, max=18251, avg=7325.89, stdev=2124.71 00:21:59.393 clat percentiles (usec): 00:21:59.393 | 1.00th=[ 2040], 5.00th=[ 2933], 10.00th=[ 3884], 20.00th=[ 6194], 00:21:59.393 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7832], 00:21:59.393 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9503], 95.00th=[10683], 00:21:59.393 | 99.00th=[12256], 99.50th=[13042], 99.90th=[15401], 99.95th=[16057], 00:21:59.393 | 99.99th=[17171] 00:21:59.393 bw ( KiB/s): min= 2472, max=29248, per=88.26%, avg=21205.82, stdev=7235.24, samples=11 00:21:59.393 iops : min= 618, max= 7312, avg=5301.45, stdev=1808.81, samples=11 00:21:59.393 lat (usec) : 500=0.02%, 750=0.05%, 1000=0.07% 00:21:59.393 lat (msec) : 2=0.69%, 4=6.21%, 10=77.33%, 20=15.61%, 50=0.01% 00:21:59.393 cpu : usr=5.66%, sys=21.91%, ctx=6334, majf=0, minf=121 00:21:59.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:59.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:59.393 issued rwts: total=61363,32206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.393 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:59.393 00:21:59.393 Run status group 0 (all jobs): 00:21:59.393 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=240MiB (251MB), run=6007-6007msec 00:21:59.393 WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=126MiB (132MB), run=5362-5362msec 00:21:59.393 00:21:59.393 Disk stats (read/write): 00:21:59.393 nvme0n1: ios=60439/31618, merge=0/0, ticks=490011/215928, in_queue=705939, util=98.75% 00:21:59.394 13:32:16 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:59.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:59.394 13:32:16 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:59.394 13:32:16 -- common/autotest_common.sh@1205 -- # local i=0 00:21:59.394 13:32:16 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:59.394 13:32:16 
-- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:59.394 13:32:16 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:59.394 13:32:16 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:59.394 13:32:16 -- common/autotest_common.sh@1217 -- # return 0 00:21:59.394 13:32:16 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.394 13:32:16 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:21:59.394 13:32:16 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:21:59.394 13:32:16 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:21:59.394 13:32:16 -- target/multipath.sh@144 -- # nvmftestfini 00:21:59.394 13:32:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:59.394 13:32:16 -- nvmf/common.sh@117 -- # sync 00:21:59.394 13:32:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.394 13:32:16 -- nvmf/common.sh@120 -- # set +e 00:21:59.394 13:32:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.394 13:32:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.394 rmmod nvme_tcp 00:21:59.394 rmmod nvme_fabrics 00:21:59.394 rmmod nvme_keyring 00:21:59.394 13:32:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.394 13:32:16 -- nvmf/common.sh@124 -- # set -e 00:21:59.394 13:32:16 -- nvmf/common.sh@125 -- # return 0 00:21:59.394 13:32:16 -- nvmf/common.sh@478 -- # '[' -n 74324 ']' 00:21:59.394 13:32:16 -- nvmf/common.sh@479 -- # killprocess 74324 00:21:59.394 13:32:16 -- common/autotest_common.sh@936 -- # '[' -z 74324 ']' 00:21:59.394 13:32:16 -- common/autotest_common.sh@940 -- # kill -0 74324 00:21:59.394 13:32:16 -- common/autotest_common.sh@941 -- # uname 00:21:59.394 13:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:59.652 13:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74324 00:21:59.652 killing process with pid 74324 00:21:59.652 13:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:59.652 13:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:59.652 13:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74324' 00:21:59.652 13:32:16 -- common/autotest_common.sh@955 -- # kill 74324 00:21:59.652 13:32:16 -- common/autotest_common.sh@960 -- # wait 74324 00:21:59.910 13:32:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:59.910 13:32:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:59.910 13:32:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:59.910 13:32:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.910 13:32:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.910 13:32:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.910 13:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.910 13:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.910 13:32:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:59.910 00:21:59.910 real 0m20.714s 00:21:59.910 user 1m20.869s 00:21:59.910 sys 0m6.143s 00:21:59.911 13:32:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:59.911 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:21:59.911 ************************************ 00:21:59.911 END TEST nvmf_multipath 00:21:59.911 ************************************ 00:21:59.911 13:32:17 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:59.911 13:32:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:59.911 13:32:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:59.911 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.169 ************************************ 00:22:00.169 START TEST nvmf_zcopy 00:22:00.169 ************************************ 00:22:00.169 13:32:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:22:00.169 * Looking for test storage... 00:22:00.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:00.169 13:32:17 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:00.169 13:32:17 -- nvmf/common.sh@7 -- # uname -s 00:22:00.169 13:32:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.169 13:32:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.169 13:32:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.169 13:32:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.169 13:32:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.169 13:32:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.169 13:32:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.169 13:32:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.169 13:32:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.169 13:32:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.169 13:32:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:00.169 13:32:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:00.169 13:32:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.169 13:32:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.169 13:32:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:00.169 13:32:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.169 13:32:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:00.169 13:32:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.169 13:32:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.169 13:32:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.169 13:32:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.170 13:32:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.170 13:32:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.170 13:32:17 -- paths/export.sh@5 -- # export PATH 00:22:00.170 13:32:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.170 13:32:17 -- nvmf/common.sh@47 -- # : 0 00:22:00.170 13:32:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:00.170 13:32:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:00.170 13:32:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.170 13:32:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.170 13:32:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.170 13:32:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:00.170 13:32:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:00.170 13:32:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:00.170 13:32:17 -- target/zcopy.sh@12 -- # nvmftestinit 00:22:00.170 13:32:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:00.170 13:32:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.170 13:32:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:00.170 13:32:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:00.170 13:32:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:00.170 13:32:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.170 13:32:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.170 13:32:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.170 13:32:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:00.170 13:32:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:00.170 13:32:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:00.170 13:32:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:00.170 13:32:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:00.170 13:32:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:00.170 13:32:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.170 13:32:17 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.170 13:32:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:00.170 13:32:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:00.170 13:32:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:00.170 13:32:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:00.170 13:32:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:00.170 13:32:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.170 13:32:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:00.170 13:32:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:00.170 13:32:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:00.170 13:32:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:00.170 13:32:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:00.170 13:32:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:00.170 Cannot find device "nvmf_tgt_br" 00:22:00.170 13:32:17 -- nvmf/common.sh@155 -- # true 00:22:00.170 13:32:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:00.170 Cannot find device "nvmf_tgt_br2" 00:22:00.170 13:32:17 -- nvmf/common.sh@156 -- # true 00:22:00.170 13:32:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:00.170 13:32:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:00.170 Cannot find device "nvmf_tgt_br" 00:22:00.170 13:32:17 -- nvmf/common.sh@158 -- # true 00:22:00.170 13:32:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:00.170 Cannot find device "nvmf_tgt_br2" 00:22:00.170 13:32:17 -- nvmf/common.sh@159 -- # true 00:22:00.170 13:32:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:00.429 13:32:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:00.429 13:32:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:00.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.429 13:32:17 -- nvmf/common.sh@162 -- # true 00:22:00.429 13:32:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:00.429 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.429 13:32:17 -- nvmf/common.sh@163 -- # true 00:22:00.429 13:32:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:00.429 13:32:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:00.429 13:32:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:00.429 13:32:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:00.429 13:32:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:00.429 13:32:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:00.429 13:32:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:00.429 13:32:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:00.429 13:32:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:00.429 13:32:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:00.429 13:32:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:00.429 13:32:17 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:00.429 13:32:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:00.429 13:32:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:00.429 13:32:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:00.429 13:32:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:00.429 13:32:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:00.429 13:32:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:00.429 13:32:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:00.429 13:32:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:00.429 13:32:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:00.429 13:32:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:00.429 13:32:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:00.429 13:32:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:00.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:00.429 00:22:00.429 --- 10.0.0.2 ping statistics --- 00:22:00.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.429 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:00.429 13:32:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:00.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:00.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:00.429 00:22:00.429 --- 10.0.0.3 ping statistics --- 00:22:00.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.429 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:00.429 13:32:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:00.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:00.688 00:22:00.688 --- 10.0.0.1 ping statistics --- 00:22:00.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.688 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:00.688 13:32:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.688 13:32:17 -- nvmf/common.sh@422 -- # return 0 00:22:00.688 13:32:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:00.688 13:32:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.688 13:32:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:00.688 13:32:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:00.688 13:32:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.688 13:32:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:00.688 13:32:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:00.688 13:32:17 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:22:00.688 13:32:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:00.688 13:32:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:00.688 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 13:32:17 -- nvmf/common.sh@470 -- # nvmfpid=74924 00:22:00.688 13:32:17 -- nvmf/common.sh@471 -- # waitforlisten 74924 00:22:00.688 13:32:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:00.688 13:32:17 -- common/autotest_common.sh@817 -- # '[' -z 74924 ']' 00:22:00.688 13:32:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.688 13:32:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:00.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.688 13:32:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.688 13:32:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:00.688 13:32:17 -- common/autotest_common.sh@10 -- # set +x 00:22:00.688 [2024-04-26 13:32:17.962862] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:00.688 [2024-04-26 13:32:17.963009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.688 [2024-04-26 13:32:18.100669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.964 [2024-04-26 13:32:18.238544] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.964 [2024-04-26 13:32:18.238896] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.964 [2024-04-26 13:32:18.239027] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.964 [2024-04-26 13:32:18.239126] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.964 [2024-04-26 13:32:18.239145] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
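For readability, the nvmf_veth_init sequence traced above reduces to the outline below. This is a hand-condensed recap of the ip/iptables commands already recorded in this log, not additional output from the run; the interface names and addresses are exactly the ones the trace uses (nvmf_init_if 10.0.0.1 in the root namespace, nvmf_tgt_if 10.0.0.2 and nvmf_tgt_if2 10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, joined through the nvmf_br bridge).

# recap of the nvmf_veth_init commands traced above (not original log output)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that plumbing in place, the nvmf_tgt process launched above with "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2" listens on 10.0.0.2/10.0.0.3 inside the namespace while the initiator reaches it from the root namespace across the bridge, which is what the three ping checks verified.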
00:22:00.964 [2024-04-26 13:32:18.239193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.913 13:32:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:01.913 13:32:18 -- common/autotest_common.sh@850 -- # return 0 00:22:01.913 13:32:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:01.913 13:32:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:01.913 13:32:18 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 13:32:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.913 13:32:19 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:22:01.913 13:32:19 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 [2024-04-26 13:32:19.048698] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 [2024-04-26 13:32:19.064832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 malloc0 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:01.913 13:32:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:01.913 13:32:19 -- common/autotest_common.sh@10 -- # set +x 00:22:01.913 13:32:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:01.913 13:32:19 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:22:01.913 13:32:19 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:22:01.913 13:32:19 -- nvmf/common.sh@521 -- # config=() 00:22:01.913 13:32:19 -- nvmf/common.sh@521 -- # local subsystem config 00:22:01.913 13:32:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:01.913 13:32:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:01.913 { 00:22:01.913 "params": { 00:22:01.913 "name": "Nvme$subsystem", 00:22:01.913 "trtype": "$TEST_TRANSPORT", 
00:22:01.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:01.913 "adrfam": "ipv4", 00:22:01.913 "trsvcid": "$NVMF_PORT", 00:22:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:01.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:01.913 "hdgst": ${hdgst:-false}, 00:22:01.913 "ddgst": ${ddgst:-false} 00:22:01.913 }, 00:22:01.913 "method": "bdev_nvme_attach_controller" 00:22:01.913 } 00:22:01.913 EOF 00:22:01.913 )") 00:22:01.913 13:32:19 -- nvmf/common.sh@543 -- # cat 00:22:01.913 13:32:19 -- nvmf/common.sh@545 -- # jq . 00:22:01.913 13:32:19 -- nvmf/common.sh@546 -- # IFS=, 00:22:01.913 13:32:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:01.913 "params": { 00:22:01.913 "name": "Nvme1", 00:22:01.913 "trtype": "tcp", 00:22:01.913 "traddr": "10.0.0.2", 00:22:01.913 "adrfam": "ipv4", 00:22:01.913 "trsvcid": "4420", 00:22:01.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.913 "hdgst": false, 00:22:01.913 "ddgst": false 00:22:01.913 }, 00:22:01.913 "method": "bdev_nvme_attach_controller" 00:22:01.913 }' 00:22:01.913 [2024-04-26 13:32:19.159538] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:01.913 [2024-04-26 13:32:19.159631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74976 ] 00:22:01.913 [2024-04-26 13:32:19.291095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.171 [2024-04-26 13:32:19.439943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.429 Running I/O for 10 seconds... 00:22:12.482 00:22:12.482 Latency(us) 00:22:12.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:12.482 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:12.482 Verification LBA range: start 0x0 length 0x1000 00:22:12.482 Nvme1n1 : 10.01 5623.87 43.94 0.00 0.00 22688.81 316.51 32887.16 00:22:12.482 =================================================================================================================== 00:22:12.482 Total : 5623.87 43.94 0.00 0.00 22688.81 316.51 32887.16 00:22:12.482 13:32:29 -- target/zcopy.sh@39 -- # perfpid=75100 00:22:12.482 13:32:29 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:12.482 13:32:29 -- target/zcopy.sh@41 -- # xtrace_disable 00:22:12.482 13:32:29 -- common/autotest_common.sh@10 -- # set +x 00:22:12.482 13:32:29 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:12.482 13:32:29 -- nvmf/common.sh@521 -- # config=() 00:22:12.482 13:32:29 -- nvmf/common.sh@521 -- # local subsystem config 00:22:12.482 13:32:29 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:12.482 13:32:29 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:12.482 { 00:22:12.482 "params": { 00:22:12.482 "name": "Nvme$subsystem", 00:22:12.482 "trtype": "$TEST_TRANSPORT", 00:22:12.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.482 "adrfam": "ipv4", 00:22:12.482 "trsvcid": "$NVMF_PORT", 00:22:12.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.482 "hdgst": ${hdgst:-false}, 00:22:12.482 "ddgst": ${ddgst:-false} 00:22:12.482 }, 00:22:12.482 "method": "bdev_nvme_attach_controller" 00:22:12.482 } 00:22:12.482 EOF 00:22:12.482 
)") 00:22:12.482 13:32:29 -- nvmf/common.sh@543 -- # cat 00:22:12.482 [2024-04-26 13:32:29.911076] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.482 [2024-04-26 13:32:29.911323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.482 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.482 13:32:29 -- nvmf/common.sh@545 -- # jq . 00:22:12.482 13:32:29 -- nvmf/common.sh@546 -- # IFS=, 00:22:12.482 13:32:29 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:12.482 "params": { 00:22:12.482 "name": "Nvme1", 00:22:12.482 "trtype": "tcp", 00:22:12.482 "traddr": "10.0.0.2", 00:22:12.482 "adrfam": "ipv4", 00:22:12.482 "trsvcid": "4420", 00:22:12.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.482 "hdgst": false, 00:22:12.482 "ddgst": false 00:22:12.482 }, 00:22:12.482 "method": "bdev_nvme_attach_controller" 00:22:12.482 }' 00:22:12.482 [2024-04-26 13:32:29.923002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.482 [2024-04-26 13:32:29.923036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.482 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.935013] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.935049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.947001] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.947032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 [2024-04-26 13:32:29.949587] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
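The bdevperf launch interleaved above follows one pattern: gen_nvmf_target_json emits a bdev_nvme_attach_controller configuration for Nvme1 (tcp, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, digests off), and bdevperf reads it through a process-substitution file descriptor, which is why the recorded command line shows --json /dev/fd/63. The following is a minimal hand-written sketch of that invocation using only the flags recorded in this log; it is not part of the original output and assumes test/nvmf/common.sh has already been sourced so gen_nvmf_target_json and its NVMF_* variables are available.

#!/usr/bin/env bash
# Sketch only: mirrors the launch pattern traced above.
# Assumption: gen_nvmf_target_json comes from the sourced test/nvmf/common.sh.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# 5-second 50/50 random read/write run, queue depth 128, 8192-byte I/O,
# against the target config generated on the fly (<() expands to /dev/fd/N).
"$bdevperf" --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192

The earlier 10-second run in this section used the same pattern with -t 10 -q 128 -w verify -o 8192 via /dev/fd/62. The repeated "Requested NSID 1 already in use" / Invalid parameters messages surrounding this launch record nvmf_subsystem_add_ns RPC calls that fail because namespace 1 already exists on cnode1 while the bdevperf job is running.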
00:22:12.790 [2024-04-26 13:32:29.949671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75100 ] 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.959002] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.959031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.971011] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.971040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.983033] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.983082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:29.995031] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:29.995060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.007028] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.007056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.019058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.019094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 
2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.031051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.031086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.043041] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.043072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.055040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.055069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.067030] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.067060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.079049] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.079076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 [2024-04-26 13:32:30.081889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.091074] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.091115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.103045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.103075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.115061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.115088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.790 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.790 [2024-04-26 13:32:30.127060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.790 [2024-04-26 13:32:30.127085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.139089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.139128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.151084] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.151119] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.163126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.163163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.175114] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.175150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:22:12.791 [2024-04-26 13:32:30.177435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.187115] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.187151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.199123] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.199160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:12.791 [2024-04-26 13:32:30.211145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:12.791 [2024-04-26 13:32:30.211190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:12.791 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.049 [2024-04-26 13:32:30.223150] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.049 [2024-04-26 13:32:30.223192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.049 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.049 [2024-04-26 13:32:30.235143] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.049 [2024-04-26 13:32:30.235182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.049 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.049 [2024-04-26 13:32:30.247140] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.049 [2024-04-26 13:32:30.247178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.049 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.049 [2024-04-26 13:32:30.259148] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.049 [2024-04-26 13:32:30.259186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.271126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.271156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.283153] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.283190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.295159] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.295192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.307144] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.307192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.319145] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.319176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.331154] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 
13:32:30.331186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.343183] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.343218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.355249] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.355285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 Running I/O for 5 seconds... 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.371065] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.371106] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.387245] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.387282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.404866] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.404909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.421112] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.421151] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.438200] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.438237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.454220] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.454282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.471295] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.471330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.050 [2024-04-26 13:32:30.485849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.050 [2024-04-26 13:32:30.485891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.050 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.502875] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.502912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.518719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.518756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.536541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.536594] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.552801] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.552854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.571739] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.571808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.587052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.587091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.604348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.604386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.620778] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.620842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.638351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.638392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.655227] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.655281] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.672211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.672253] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.688988] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.689032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.706469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.706511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.722868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.722906] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.734345] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.734382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.309 [2024-04-26 13:32:30.749688] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.309 [2024-04-26 13:32:30.749739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:13.309 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.766470] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.766510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.782027] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.782065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.799509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.799552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.815432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.815485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.832646] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.832682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.568 [2024-04-26 13:32:30.849052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.849105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.568 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:13.568 [2024-04-26 13:32:30.864772] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.568 [2024-04-26 13:32:30.864835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.880690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.880740] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.898051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.898105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.915277] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.915347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.932018] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.932074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.948405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.948458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.965134] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.965204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.981737] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.981773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:30.998479] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:30.998517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.569 [2024-04-26 13:32:31.014362] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.569 [2024-04-26 13:32:31.014402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.569 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.828 [2024-04-26 13:32:31.025481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.828 [2024-04-26 13:32:31.025517] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.828 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.828 [2024-04-26 13:32:31.040199] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.828 [2024-04-26 13:32:31.040250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.828 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.828 [2024-04-26 13:32:31.056260] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.828 [2024-04-26 13:32:31.056314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.073403] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.073455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.089089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.089124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.100147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.100183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.115238] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.115290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.132072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.132108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.148040] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.148076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.159302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.159342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.174987] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.175041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.191441] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.191493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.207838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.207887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.223883] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.223933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.234814] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.234894] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.249782] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.249844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.260400] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.260449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:13.829 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:13.829 [2024-04-26 13:32:31.275996] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:13.829 [2024-04-26 13:32:31.276050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.292405] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.292457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.309366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.309405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.325454] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.325508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.343804] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.343854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.358450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.358488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.374955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.375004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.391751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.391817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.408126] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.408176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.425006] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.425089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.441668] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.441720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.456607] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.456643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.473810] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:14.088 [2024-04-26 13:32:31.473876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.489686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.489721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.505838] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.505897] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.088 [2024-04-26 13:32:31.522106] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.088 [2024-04-26 13:32:31.522144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.088 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.347 [2024-04-26 13:32:31.538058] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.347 [2024-04-26 13:32:31.538097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.347 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.347 [2024-04-26 13:32:31.554168] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.347 [2024-04-26 13:32:31.554218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.347 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.347 [2024-04-26 13:32:31.570955] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.347 [2024-04-26 13:32:31.571042] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.347 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.347 [2024-04-26 13:32:31.587089] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.587139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.597292] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.597343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.611353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.611402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.626299] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.626338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.642511] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.642551] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.659950] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.659990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.676450] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.676505] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.693531] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.693592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.710204] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.710286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.727226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.727297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.743749] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.743860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.759639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.759692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.775466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.775515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.348 [2024-04-26 13:32:31.787221] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.348 [2024-04-26 13:32:31.787273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.348 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.803179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.803232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.819975] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.820012] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.837323] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.837360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.853472] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.853522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.870212] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.870291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.886858] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.886895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.903735] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.903799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.919751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.919816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.936496] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.936533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.953045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.953099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.969907] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.969959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:31.986457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:31.986493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:31 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:14.607 [2024-04-26 13:32:32.003678] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:32.003717] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:32.019788] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:32.019855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.607 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.607 [2024-04-26 13:32:32.036082] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.607 [2024-04-26 13:32:32.036120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.608 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.608 [2024-04-26 13:32:32.053585] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.608 [2024-04-26 13:32:32.053649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.867 [2024-04-26 13:32:32.069597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.867 [2024-04-26 13:32:32.069659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.867 [2024-04-26 13:32:32.080436] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.867 [2024-04-26 13:32:32.080567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.867 [2024-04-26 13:32:32.096481] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.867 [2024-04-26 13:32:32.096546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.867 [2024-04-26 13:32:32.112149] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.867 [2024-04-26 13:32:32.112221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.867 [2024-04-26 13:32:32.128122] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.867 [2024-04-26 13:32:32.128158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.867 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.138700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.138735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.154807] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.154845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.171883] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.171936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.187541] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.187593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.203877] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.203927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.221597] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.221648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.237302] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.237354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.255322] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.255391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.272570] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.272646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.288276] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.288349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:14.868 [2024-04-26 13:32:32.299813] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:14.868 [2024-04-26 13:32:32.299881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:14.868 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.315446] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.315518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.331283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.331335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.342226] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.342288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.358144] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.358179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.373733] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.373768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.390748] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.390799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.406745] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.406821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.418368] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.418408] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.433339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.433390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.449968] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.450019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.466525] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.466565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.485124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.485192] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.501272] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.501326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.518563] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.518663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.535266] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.535341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.551958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.552019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.127 [2024-04-26 13:32:32.568539] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.127 [2024-04-26 13:32:32.568575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.127 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.585781] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.585844] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.602072] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.602108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.618019] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:15.386 [2024-04-26 13:32:32.618070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.628878] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.628912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.643933] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.643967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.659761] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.659811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.676500] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.676534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.690825] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.690902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.707726] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.707767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.724373] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.724429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.739434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.739487] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.755475] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.755554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.766232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.766282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.781198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.781266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.798124] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.798175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.814880] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.814931] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.386 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.386 [2024-04-26 13:32:32.832045] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.386 [2024-04-26 13:32:32.832105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.849217] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.849272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.865290] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.865348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.881823] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.881874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.898052] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.898102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.915571] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.915621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.931452] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.931504] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.948708] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.948758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.965012] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.965073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.982187] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.982248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.646 2024/04/26 13:32:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.646 [2024-04-26 13:32:32.997689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.646 [2024-04-26 13:32:32.997727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.647 [2024-04-26 13:32:33.014259] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.647 [2024-04-26 13:32:33.014311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.647 [2024-04-26 13:32:33.031198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.647 [2024-04-26 13:32:33.031259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.647 [2024-04-26 13:32:33.047927] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.647 [2024-04-26 13:32:33.047980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.647 [2024-04-26 13:32:33.064206] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.647 [2024-04-26 13:32:33.064250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.647 [2024-04-26 13:32:33.080959] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.647 [2024-04-26 13:32:33.080997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.647 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.097340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.097393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.114046] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.114083] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.131432] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.131485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:15.905 [2024-04-26 13:32:33.147732] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.147820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.164701] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.164768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.180555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.180591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.197198] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.197267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.213315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.213356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.230657] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.230711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.247457] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.247493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.263690] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.905 [2024-04-26 13:32:33.263744] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.905 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.905 [2024-04-26 13:32:33.280912] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.906 [2024-04-26 13:32:33.280962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.906 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.906 [2024-04-26 13:32:33.296957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.906 [2024-04-26 13:32:33.296994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.906 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.906 [2024-04-26 13:32:33.313596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.906 [2024-04-26 13:32:33.313646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.906 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.906 [2024-04-26 13:32:33.330370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.906 [2024-04-26 13:32:33.330406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.906 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:15.906 [2024-04-26 13:32:33.348321] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:15.906 [2024-04-26 13:32:33.348384] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:15.906 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.364750] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.364821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.380660] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.380712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.399181] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.399247] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.415232] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.415294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.431639] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.431691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.447691] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.447745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.458235] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.458286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.473849] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.473905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.490617] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.490660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.506868] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.506951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.524448] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.524483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.540870] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.540907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.556587] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.556655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.568088] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.568140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.164 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.164 [2024-04-26 13:32:33.583850] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.164 [2024-04-26 13:32:33.583916] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.165 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.165 [2024-04-26 13:32:33.600601] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.165 [2024-04-26 13:32:33.600654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.165 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.617808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.617860] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.633905] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.633941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.650379] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.650432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.667044] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.667108] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.683632] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.683684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.699982] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.700018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.717086] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.717134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.733957] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.734006] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.423 [2024-04-26 13:32:33.750636] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.423 [2024-04-26 13:32:33.750670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.423 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.767215] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.767269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.783823] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:16.424 [2024-04-26 13:32:33.783859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.799699] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.799735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.817225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.817328] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.833802] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.833853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.850958] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.851007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.424 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.424 [2024-04-26 13:32:33.868579] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.424 [2024-04-26 13:32:33.868650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.886469] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.886509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.904253] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.904329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.921254] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.921325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.938403] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.938451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.955700] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.955749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.682 [2024-04-26 13:32:33.972343] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.682 [2024-04-26 13:32:33.972395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.682 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:33.982635] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:33.982688] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:33.998351] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:33.998392] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.014719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.014774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.031225] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.031303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.047574] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.047614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.063942] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.063992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.075315] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.075392] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.090650] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.090704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.101564] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.101606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.683 [2024-04-26 13:32:34.116946] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.683 [2024-04-26 13:32:34.116998] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.683 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.134686] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.134743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.151248] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.151290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.167890] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.167941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.185060] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.185120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.202021] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.202071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.218008] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.218048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.234237] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.234301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.244039] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.244076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.259406] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.259489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.275523] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.275570] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.292353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.292405] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:16.941 [2024-04-26 13:32:34.308365] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.308418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.324509] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.324547] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.341453] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.341495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.357724] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.357769] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:16.941 [2024-04-26 13:32:34.374051] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:16.941 [2024-04-26 13:32:34.374089] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:16.941 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.390596] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.390655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.407555] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.407636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.423960] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.423997] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.440861] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.440919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.457394] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.457486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.473032] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.473123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.484077] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.484118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.500147] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.500186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.516512] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.516568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.532552] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.532609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.543278] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.543317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.558713] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.558750] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.201 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.201 [2024-04-26 13:32:34.575296] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.201 [2024-04-26 13:32:34.575356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.202 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.202 [2024-04-26 13:32:34.592353] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.202 [2024-04-26 13:32:34.592428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.202 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.202 [2024-04-26 13:32:34.609622] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.202 [2024-04-26 13:32:34.609721] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.202 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.202 [2024-04-26 13:32:34.626497] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.202 [2024-04-26 13:32:34.626534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.202 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.202 [2024-04-26 13:32:34.642887] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.202 [2024-04-26 13:32:34.642928] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.202 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.659569] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.659609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.676489] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.676528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.692765] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.692822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.702992] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.703033] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.718376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.718421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.735689] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.735754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.752382] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.752426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.768275] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.768315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.785418] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.785456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.461 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.461 [2024-04-26 13:32:34.801742] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.461 [2024-04-26 13:32:34.801797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.818994] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.819043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.836133] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.836199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.852599] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.852643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.869102] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.869163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.886811] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.886867] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.462 [2024-04-26 13:32:34.903677] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.462 [2024-04-26 13:32:34.903757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.462 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.721 [2024-04-26 13:32:34.920751] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.721 [2024-04-26 13:32:34.920848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.721 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.721 [2024-04-26 13:32:34.938429] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:17.721 [2024-04-26 13:32:34.938498] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.721 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.721 [2024-04-26 13:32:34.956466] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.721 [2024-04-26 13:32:34.956510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:34.973184] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:34.973251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:34.990949] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:34.991019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.007881] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.007933] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.022675] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.022742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.039808] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.039856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.056795] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.056846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.073283] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.073323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.090165] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.090204] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.107586] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.107672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.124068] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.124143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.141211] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.141307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.722 [2024-04-26 13:32:35.158758] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.722 [2024-04-26 13:32:35.158867] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.722 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.996 [2024-04-26 13:32:35.175692] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.175748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.192179] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.192215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.209719] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.209798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.224978] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.225030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.242364] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.242404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.259327] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.259381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.277061] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.277129] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.294158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.294210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.310444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.310482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.328201] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.328241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.345158] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.345197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.361062] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.361103] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:17.997
00:22:17.997 Latency(us)
00:22:17.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.997 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:22:17.997 Nvme1n1 : 5.01 10897.14 85.13 0.00 0.00 11730.44 4736.47 21209.83
00:22:17.997 ===================================================================================================================
00:22:17.997 Total : 10897.14 85.13 0.00 0.00 11730.44 4736.47 21209.83
00:22:17.997 [2024-04-26 13:32:35.373264] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.373293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.385257] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.385292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.393247] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.393280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.405370] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.405415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.417328] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.417381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.997 [2024-04-26 13:32:35.429336] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:17.997 [2024-04-26 13:32:35.429391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.997 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284
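The run of identical failures continues below. Stripped of the Jenkins and SPDK log framing, every iteration is the same JSON-RPC call: nvmf_subsystem_add_ns for bdev malloc0 with nsid 1 against nqn.2016-06.io.spdk:cnode1, which the target rejects with Code=-32602 because NSID 1 is still attached to the subsystem. A minimal way to reproduce one iteration by hand, assuming the default RPC socket and that the test's rpc_cmd helper forwards to scripts/rpc.py, would be the sketch below; it is not part of the captured run.
# Expected to fail with "Requested NSID 1 already in use" -> JSON-RPC error -32602,
# exactly as logged above (sketch only, parameters taken from the logged call).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1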
[2024-04-26 13:32:35.441341] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.441396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.453349] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.453406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.465349] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.465398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.477340] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.477388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.489339] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.489394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.501389] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.501443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.513319] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.513360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.525348] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.525401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.537376] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.537410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.549414] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.549457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.284 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.284 [2024-04-26 13:32:35.561397] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.284 [2024-04-26 13:32:35.561457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.573366] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.573393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.585402] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.585442] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.597405] 
subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.597451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.609434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.609482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.621444] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.621499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.633434] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.633474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 [2024-04-26 13:32:35.645396] subsystem.c:1896:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:18.285 [2024-04-26 13:32:35.645428] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:18.285 2024/04/26 13:32:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:18.285 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (75100) - No such process 00:22:18.285 13:32:35 -- target/zcopy.sh@49 -- # wait 75100 00:22:18.285 13:32:35 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:18.285 13:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.285 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:22:18.285 13:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.285 13:32:35 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:18.285 13:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.285 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:22:18.285 delay0 00:22:18.285 13:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.285 
13:32:35 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:18.285 13:32:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:18.285 13:32:35 -- common/autotest_common.sh@10 -- # set +x 00:22:18.285 13:32:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:18.285 13:32:35 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:18.544 [2024-04-26 13:32:35.846520] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:25.108 Initializing NVMe Controllers 00:22:25.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:25.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:25.108 Initialization complete. Launching workers. 00:22:25.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 71 00:22:25.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 358, failed to submit 33 00:22:25.108 success 178, unsuccess 180, failed 0 00:22:25.108 13:32:41 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:25.108 13:32:41 -- target/zcopy.sh@60 -- # nvmftestfini 00:22:25.108 13:32:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:25.108 13:32:41 -- nvmf/common.sh@117 -- # sync 00:22:25.108 13:32:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.108 13:32:41 -- nvmf/common.sh@120 -- # set +e 00:22:25.108 13:32:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.108 13:32:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.108 rmmod nvme_tcp 00:22:25.108 rmmod nvme_fabrics 00:22:25.108 rmmod nvme_keyring 00:22:25.108 13:32:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.108 13:32:41 -- nvmf/common.sh@124 -- # set -e 00:22:25.108 13:32:41 -- nvmf/common.sh@125 -- # return 0 00:22:25.108 13:32:41 -- nvmf/common.sh@478 -- # '[' -n 74924 ']' 00:22:25.108 13:32:41 -- nvmf/common.sh@479 -- # killprocess 74924 00:22:25.108 13:32:41 -- common/autotest_common.sh@936 -- # '[' -z 74924 ']' 00:22:25.108 13:32:41 -- common/autotest_common.sh@940 -- # kill -0 74924 00:22:25.108 13:32:41 -- common/autotest_common.sh@941 -- # uname 00:22:25.108 13:32:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.108 13:32:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74924 00:22:25.108 13:32:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:25.108 killing process with pid 74924 00:22:25.108 13:32:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:25.108 13:32:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74924' 00:22:25.108 13:32:42 -- common/autotest_common.sh@955 -- # kill 74924 00:22:25.108 13:32:42 -- common/autotest_common.sh@960 -- # wait 74924 00:22:25.108 13:32:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:25.108 13:32:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:25.109 13:32:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.109 13:32:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.109 13:32:42 -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 14> /dev/null' 00:22:25.109 13:32:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.109 13:32:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:25.109 00:22:25.109 real 0m24.910s 00:22:25.109 user 0m40.290s 00:22:25.109 sys 0m6.743s 00:22:25.109 ************************************ 00:22:25.109 END TEST nvmf_zcopy 00:22:25.109 ************************************ 00:22:25.109 13:32:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:25.109 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:22:25.109 13:32:42 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:25.109 13:32:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:25.109 13:32:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:25.109 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:22:25.109 ************************************ 00:22:25.109 START TEST nvmf_nmic 00:22:25.109 ************************************ 00:22:25.109 13:32:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:25.109 * Looking for test storage... 00:22:25.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:25.109 13:32:42 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:25.109 13:32:42 -- nvmf/common.sh@7 -- # uname -s 00:22:25.109 13:32:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.109 13:32:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.109 13:32:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.109 13:32:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.109 13:32:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.109 13:32:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.109 13:32:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.109 13:32:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.109 13:32:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.109 13:32:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:25.109 13:32:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:25.109 13:32:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.109 13:32:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.109 13:32:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:25.109 13:32:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.109 13:32:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:25.109 13:32:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.109 13:32:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.109 13:32:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.109 13:32:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.109 13:32:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.109 13:32:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.109 13:32:42 -- paths/export.sh@5 -- # export PATH 00:22:25.109 13:32:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.109 13:32:42 -- nvmf/common.sh@47 -- # : 0 00:22:25.109 13:32:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.109 13:32:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.109 13:32:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.109 13:32:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.109 13:32:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.109 13:32:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.109 13:32:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.109 13:32:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.109 13:32:42 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:25.109 13:32:42 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:25.109 13:32:42 -- target/nmic.sh@14 -- # nvmftestinit 00:22:25.109 13:32:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:25.109 13:32:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:25.109 13:32:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:25.109 13:32:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:25.109 13:32:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:25.109 13:32:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:22:25.109 13:32:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.109 13:32:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.109 13:32:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:25.109 13:32:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:25.109 13:32:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.109 13:32:42 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.109 13:32:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:25.109 13:32:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:25.109 13:32:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:25.109 13:32:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:25.109 13:32:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:25.109 13:32:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.109 13:32:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:25.109 13:32:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:25.368 13:32:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:25.368 13:32:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:25.368 13:32:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:25.368 13:32:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:25.368 Cannot find device "nvmf_tgt_br" 00:22:25.368 13:32:42 -- nvmf/common.sh@155 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:25.368 Cannot find device "nvmf_tgt_br2" 00:22:25.368 13:32:42 -- nvmf/common.sh@156 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:25.368 13:32:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:25.368 Cannot find device "nvmf_tgt_br" 00:22:25.368 13:32:42 -- nvmf/common.sh@158 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:25.368 Cannot find device "nvmf_tgt_br2" 00:22:25.368 13:32:42 -- nvmf/common.sh@159 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:25.368 13:32:42 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:25.368 13:32:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:25.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.368 13:32:42 -- nvmf/common.sh@162 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:25.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:25.368 13:32:42 -- nvmf/common.sh@163 -- # true 00:22:25.368 13:32:42 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:25.368 13:32:42 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:25.368 13:32:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:25.368 13:32:42 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:25.368 
13:32:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:25.368 13:32:42 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:25.368 13:32:42 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:25.368 13:32:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:25.368 13:32:42 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:25.368 13:32:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:25.368 13:32:42 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:25.368 13:32:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:25.368 13:32:42 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:25.626 13:32:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:25.626 13:32:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:25.626 13:32:42 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:25.626 13:32:42 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:25.626 13:32:42 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:25.626 13:32:42 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:25.626 13:32:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:25.626 13:32:42 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:25.626 13:32:42 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:25.626 13:32:42 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:25.626 13:32:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:25.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:22:25.626 00:22:25.626 --- 10.0.0.2 ping statistics --- 00:22:25.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.626 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:25.626 13:32:42 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:25.626 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:25.626 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:22:25.626 00:22:25.626 --- 10.0.0.3 ping statistics --- 00:22:25.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.626 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:22:25.626 13:32:42 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:25.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:22:25.626 00:22:25.626 --- 10.0.0.1 ping statistics --- 00:22:25.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.626 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:25.626 13:32:42 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.626 13:32:42 -- nvmf/common.sh@422 -- # return 0 00:22:25.626 13:32:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:25.626 13:32:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.626 13:32:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:25.626 13:32:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:25.626 13:32:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.626 13:32:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:25.626 13:32:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:25.626 13:32:42 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:22:25.626 13:32:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:25.626 13:32:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:25.626 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:22:25.626 13:32:42 -- nvmf/common.sh@470 -- # nvmfpid=75425 00:22:25.626 13:32:42 -- nvmf/common.sh@471 -- # waitforlisten 75425 00:22:25.626 13:32:42 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.626 13:32:42 -- common/autotest_common.sh@817 -- # '[' -z 75425 ']' 00:22:25.626 13:32:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.626 13:32:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.626 13:32:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.626 13:32:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.626 13:32:42 -- common/autotest_common.sh@10 -- # set +x 00:22:25.626 [2024-04-26 13:32:43.003083] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:25.626 [2024-04-26 13:32:43.003195] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.905 [2024-04-26 13:32:43.146434] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.905 [2024-04-26 13:32:43.314414] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.905 [2024-04-26 13:32:43.314503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.905 [2024-04-26 13:32:43.314527] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.905 [2024-04-26 13:32:43.314544] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.905 [2024-04-26 13:32:43.314558] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
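As the app_setup_trace notices above suggest, a snapshot of the nvmf tracepoints can be pulled from the still-running target; a minimal sketch follows (the spdk_trace binary location is an assumption based on the build tree this job uses, not something shown in the log):

# decode a live snapshot of tracepoint group 'nvmf' from shared-memory id 0, per the notice above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
# or keep the raw trace file for offline analysis/debug, as the notice also suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0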
00:22:25.905 [2024-04-26 13:32:43.314758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.905 [2024-04-26 13:32:43.315292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.905 [2024-04-26 13:32:43.315402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.905 [2024-04-26 13:32:43.315403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.863 13:32:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.863 13:32:44 -- common/autotest_common.sh@850 -- # return 0 00:22:26.863 13:32:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:26.863 13:32:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.863 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.863 13:32:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.863 13:32:44 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.863 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.863 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.863 [2024-04-26 13:32:44.074759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.863 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.863 13:32:44 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:26.863 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.863 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.863 Malloc0 00:22:26.863 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.863 13:32:44 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:26.863 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.863 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.863 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.863 13:32:44 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.864 13:32:44 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 [2024-04-26 13:32:44.157122] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.864 test case1: single bdev can't be used in multiple subsystems 00:22:26.864 13:32:44 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:22:26.864 13:32:44 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.864 13:32:44 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.864 13:32:44 -- target/nmic.sh@28 -- # nmic_status=0 00:22:26.864 13:32:44 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 [2024-04-26 13:32:44.180734] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:22:26.864 [2024-04-26 13:32:44.180824] subsystem.c:1930:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:22:26.864 [2024-04-26 13:32:44.180856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:26.864 2024/04/26 13:32:44 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:26.864 request: 00:22:26.864 { 00:22:26.864 "method": "nvmf_subsystem_add_ns", 00:22:26.864 "params": { 00:22:26.864 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:26.864 "namespace": { 00:22:26.864 "bdev_name": "Malloc0", 00:22:26.864 "no_auto_visible": false 00:22:26.864 } 00:22:26.864 } 00:22:26.864 } 00:22:26.864 Got JSON-RPC error response 00:22:26.864 GoRPCClient: error on JSON-RPC call 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:26.864 13:32:44 -- target/nmic.sh@29 -- # nmic_status=1 00:22:26.864 13:32:44 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:22:26.864 Adding namespace failed - expected result. 00:22:26.864 13:32:44 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
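Stripped of the rpc_cmd/xtrace wrappers, the shared-bdev check in test case 1 above amounts to roughly the RPC sequence below. This is a sketch reconstructed from the trace; it assumes the default /var/tmp/spdk.sock RPC socket that waitforlisten polled earlier, and the last call is expected to fail because Malloc0 is already claimed by cnode1:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed by cnode1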
00:22:26.864 test case2: host connect to nvmf target in multiple paths 00:22:26.864 13:32:44 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:22:26.864 13:32:44 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:26.864 13:32:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:26.864 13:32:44 -- common/autotest_common.sh@10 -- # set +x 00:22:26.864 [2024-04-26 13:32:44.192816] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.864 13:32:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:26.864 13:32:44 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:27.122 13:32:44 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:22:27.122 13:32:44 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:22:27.122 13:32:44 -- common/autotest_common.sh@1184 -- # local i=0 00:22:27.122 13:32:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.122 13:32:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:22:27.122 13:32:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:29.653 13:32:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:29.653 13:32:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:29.653 13:32:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:29.653 13:32:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:22:29.653 13:32:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.653 13:32:46 -- common/autotest_common.sh@1194 -- # return 0 00:22:29.654 13:32:46 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:29.654 [global] 00:22:29.654 thread=1 00:22:29.654 invalidate=1 00:22:29.654 rw=write 00:22:29.654 time_based=1 00:22:29.654 runtime=1 00:22:29.654 ioengine=libaio 00:22:29.654 direct=1 00:22:29.654 bs=4096 00:22:29.654 iodepth=1 00:22:29.654 norandommap=0 00:22:29.654 numjobs=1 00:22:29.654 00:22:29.654 verify_dump=1 00:22:29.654 verify_backlog=512 00:22:29.654 verify_state_save=0 00:22:29.654 do_verify=1 00:22:29.654 verify=crc32c-intel 00:22:29.654 [job0] 00:22:29.654 filename=/dev/nvme0n1 00:22:29.654 Could not set queue depth (nvme0n1) 00:22:29.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:29.654 fio-3.35 00:22:29.654 Starting 1 thread 00:22:30.589 00:22:30.589 job0: (groupid=0, jobs=1): err= 0: pid=75533: Fri Apr 26 13:32:47 2024 00:22:30.589 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:22:30.589 slat (nsec): min=13295, max=48815, avg=15286.78, stdev=2336.61 00:22:30.589 clat (usec): min=136, max=1587, avg=158.84, stdev=32.53 00:22:30.589 lat (usec): min=151, max=1602, avg=174.13, stdev=32.69 00:22:30.589 clat percentiles (usec): 00:22:30.589 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:22:30.589 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 157], 00:22:30.589 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 
180], 95.00th=[ 188], 00:22:30.589 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 461], 99.95th=[ 725], 00:22:30.589 | 99.99th=[ 1582] 00:22:30.589 write: IOPS=3342, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec); 0 zone resets 00:22:30.589 slat (usec): min=19, max=193, avg=22.07, stdev= 4.29 00:22:30.589 clat (usec): min=95, max=320, avg=113.68, stdev=12.26 00:22:30.589 lat (usec): min=115, max=513, avg=135.75, stdev=13.92 00:22:30.589 clat percentiles (usec): 00:22:30.589 | 1.00th=[ 99], 5.00th=[ 101], 10.00th=[ 102], 20.00th=[ 104], 00:22:30.589 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:22:30.589 | 70.00th=[ 118], 80.00th=[ 124], 90.00th=[ 131], 95.00th=[ 137], 00:22:30.589 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 174], 99.95th=[ 215], 00:22:30.589 | 99.99th=[ 322] 00:22:30.589 bw ( KiB/s): min=12464, max=12464, per=93.22%, avg=12464.00, stdev= 0.00, samples=1 00:22:30.589 iops : min= 3116, max= 3116, avg=3116.00, stdev= 0.00, samples=1 00:22:30.589 lat (usec) : 100=1.70%, 250=98.21%, 500=0.06%, 750=0.02% 00:22:30.589 lat (msec) : 2=0.02% 00:22:30.589 cpu : usr=2.10%, sys=9.10%, ctx=6418, majf=0, minf=2 00:22:30.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.589 issued rwts: total=3072,3346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:30.589 00:22:30.589 Run status group 0 (all jobs): 00:22:30.589 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:22:30.589 WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.7MB), run=1001-1001msec 00:22:30.589 00:22:30.589 Disk stats (read/write): 00:22:30.589 nvme0n1: ios=2753/3072, merge=0/0, ticks=471/373, in_queue=844, util=91.28% 00:22:30.589 13:32:47 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:30.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:30.589 13:32:47 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:30.589 13:32:47 -- common/autotest_common.sh@1205 -- # local i=0 00:22:30.589 13:32:47 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:30.589 13:32:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:30.589 13:32:47 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:30.589 13:32:47 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:30.589 13:32:47 -- common/autotest_common.sh@1217 -- # return 0 00:22:30.589 13:32:47 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:30.589 13:32:47 -- target/nmic.sh@53 -- # nvmftestfini 00:22:30.589 13:32:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:30.589 13:32:47 -- nvmf/common.sh@117 -- # sync 00:22:30.589 13:32:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:30.589 13:32:47 -- nvmf/common.sh@120 -- # set +e 00:22:30.590 13:32:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:30.590 13:32:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:30.590 rmmod nvme_tcp 00:22:30.590 rmmod nvme_fabrics 00:22:30.590 rmmod nvme_keyring 00:22:30.590 13:32:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:30.848 13:32:48 -- nvmf/common.sh@124 -- # set -e 00:22:30.848 13:32:48 -- nvmf/common.sh@125 -- # return 0 
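The wind-down of the nmic test that follows (nvme disconnect through nvmftestfini and killprocess) boils down to roughly this sequence; a sketch only, where the pid and interface names are the ones used by this run and the final namespace removal is inferred from the _remove_spdk_ns helper rather than shown verbatim in the log:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drop the host-side controllers
modprobe -v -r nvme-tcp                         # unload the initiator transport modules
modprobe -v -r nvme-fabrics
kill 75425                                      # stop the nvmf_tgt started for this test
ip -4 addr flush nvmf_init_if                   # clear the initiator-side veth address
ip netns delete nvmf_tgt_ns_spdk                # assumed: the target namespace cleanup done by _remove_spdk_ns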
00:22:30.848 13:32:48 -- nvmf/common.sh@478 -- # '[' -n 75425 ']' 00:22:30.848 13:32:48 -- nvmf/common.sh@479 -- # killprocess 75425 00:22:30.848 13:32:48 -- common/autotest_common.sh@936 -- # '[' -z 75425 ']' 00:22:30.848 13:32:48 -- common/autotest_common.sh@940 -- # kill -0 75425 00:22:30.848 13:32:48 -- common/autotest_common.sh@941 -- # uname 00:22:30.848 13:32:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:30.848 13:32:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75425 00:22:30.848 13:32:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:30.848 13:32:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:30.848 killing process with pid 75425 00:22:30.848 13:32:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75425' 00:22:30.848 13:32:48 -- common/autotest_common.sh@955 -- # kill 75425 00:22:30.848 13:32:48 -- common/autotest_common.sh@960 -- # wait 75425 00:22:31.107 13:32:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:31.107 13:32:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:31.107 13:32:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:31.107 13:32:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:31.107 13:32:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:31.107 13:32:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.107 13:32:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.107 13:32:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.107 13:32:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:31.107 00:22:31.107 real 0m5.953s 00:22:31.107 user 0m19.562s 00:22:31.107 sys 0m1.498s 00:22:31.107 13:32:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:31.107 13:32:48 -- common/autotest_common.sh@10 -- # set +x 00:22:31.107 ************************************ 00:22:31.107 END TEST nvmf_nmic 00:22:31.107 ************************************ 00:22:31.107 13:32:48 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:31.107 13:32:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:31.107 13:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:31.107 13:32:48 -- common/autotest_common.sh@10 -- # set +x 00:22:31.107 ************************************ 00:22:31.107 START TEST nvmf_fio_target 00:22:31.107 ************************************ 00:22:31.107 13:32:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:31.367 * Looking for test storage... 
00:22:31.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:31.367 13:32:48 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:31.367 13:32:48 -- nvmf/common.sh@7 -- # uname -s 00:22:31.367 13:32:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.367 13:32:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.367 13:32:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.367 13:32:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.367 13:32:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.367 13:32:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.367 13:32:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.367 13:32:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.367 13:32:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.367 13:32:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:31.367 13:32:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:31.367 13:32:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.367 13:32:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.367 13:32:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.367 13:32:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.367 13:32:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.367 13:32:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.367 13:32:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.367 13:32:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.367 13:32:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.367 13:32:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.367 13:32:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.367 13:32:48 -- paths/export.sh@5 -- # export PATH 00:22:31.367 13:32:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.367 13:32:48 -- nvmf/common.sh@47 -- # : 0 00:22:31.367 13:32:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:31.367 13:32:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:31.367 13:32:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.367 13:32:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.367 13:32:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.367 13:32:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:31.367 13:32:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:31.367 13:32:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:31.367 13:32:48 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:31.367 13:32:48 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:31.367 13:32:48 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:31.367 13:32:48 -- target/fio.sh@16 -- # nvmftestinit 00:22:31.367 13:32:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:31.367 13:32:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.367 13:32:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:31.367 13:32:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:31.367 13:32:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:31.367 13:32:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.367 13:32:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:31.367 13:32:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.367 13:32:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:31.367 13:32:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:31.367 13:32:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.367 13:32:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.367 13:32:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:31.367 13:32:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:31.367 13:32:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.367 13:32:48 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.367 13:32:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.367 13:32:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.367 13:32:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.367 13:32:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.367 13:32:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.367 13:32:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.367 13:32:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:31.367 13:32:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:31.367 Cannot find device "nvmf_tgt_br" 00:22:31.367 13:32:48 -- nvmf/common.sh@155 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.367 Cannot find device "nvmf_tgt_br2" 00:22:31.367 13:32:48 -- nvmf/common.sh@156 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:31.367 13:32:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:31.367 Cannot find device "nvmf_tgt_br" 00:22:31.367 13:32:48 -- nvmf/common.sh@158 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:31.367 Cannot find device "nvmf_tgt_br2" 00:22:31.367 13:32:48 -- nvmf/common.sh@159 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:31.367 13:32:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:31.367 13:32:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.367 13:32:48 -- nvmf/common.sh@162 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.367 13:32:48 -- nvmf/common.sh@163 -- # true 00:22:31.367 13:32:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:31.367 13:32:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:31.367 13:32:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:31.367 13:32:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:31.626 13:32:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:31.626 13:32:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:31.626 13:32:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:31.626 13:32:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:31.626 13:32:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:31.626 13:32:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:31.627 13:32:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:31.627 13:32:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:31.627 13:32:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:31.627 13:32:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:31.627 13:32:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:22:31.627 13:32:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:31.627 13:32:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:31.627 13:32:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:31.627 13:32:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:31.627 13:32:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:31.627 13:32:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:31.627 13:32:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:31.627 13:32:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:31.627 13:32:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:31.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:22:31.627 00:22:31.627 --- 10.0.0.2 ping statistics --- 00:22:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.627 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:31.627 13:32:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:31.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:31.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:22:31.627 00:22:31.627 --- 10.0.0.3 ping statistics --- 00:22:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.627 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:22:31.627 13:32:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:31.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:22:31.627 00:22:31.627 --- 10.0.0.1 ping statistics --- 00:22:31.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.627 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:22:31.627 13:32:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.627 13:32:49 -- nvmf/common.sh@422 -- # return 0 00:22:31.627 13:32:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:31.627 13:32:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.627 13:32:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:31.627 13:32:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:31.627 13:32:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.627 13:32:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:31.627 13:32:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:31.627 13:32:49 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:22:31.627 13:32:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:31.627 13:32:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:31.627 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:22:31.627 13:32:49 -- nvmf/common.sh@470 -- # nvmfpid=75724 00:22:31.627 13:32:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.627 13:32:49 -- nvmf/common.sh@471 -- # waitforlisten 75724 00:22:31.627 13:32:49 -- common/autotest_common.sh@817 -- # '[' -z 75724 ']' 00:22:31.627 13:32:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.627 13:32:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:31.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:31.627 13:32:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.627 13:32:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:31.627 13:32:49 -- common/autotest_common.sh@10 -- # set +x 00:22:31.901 [2024-04-26 13:32:49.106534] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:31.901 [2024-04-26 13:32:49.106679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.901 [2024-04-26 13:32:49.258875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.160 [2024-04-26 13:32:49.390736] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.160 [2024-04-26 13:32:49.390833] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.160 [2024-04-26 13:32:49.390849] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.160 [2024-04-26 13:32:49.390860] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.160 [2024-04-26 13:32:49.390870] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.160 [2024-04-26 13:32:49.391238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.160 [2024-04-26 13:32:49.391378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.160 [2024-04-26 13:32:49.391472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.160 [2024-04-26 13:32:49.391478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.728 13:32:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:32.728 13:32:50 -- common/autotest_common.sh@850 -- # return 0 00:22:32.728 13:32:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:32.728 13:32:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:32.728 13:32:50 -- common/autotest_common.sh@10 -- # set +x 00:22:32.728 13:32:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.728 13:32:50 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:32.986 [2024-04-26 13:32:50.349657] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.986 13:32:50 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:33.246 13:32:50 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:22:33.246 13:32:50 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:33.505 13:32:50 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:22:33.505 13:32:50 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:34.071 13:32:51 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:22:34.071 13:32:51 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:34.071 13:32:51 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:22:34.071 13:32:51 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:22:34.330 13:32:51 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:34.899 13:32:52 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:22:34.899 13:32:52 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:35.158 13:32:52 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:22:35.158 13:32:52 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:35.417 13:32:52 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:22:35.417 13:32:52 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:22:35.417 13:32:52 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:35.676 13:32:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:35.676 13:32:53 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.936 13:32:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:35.936 13:32:53 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:36.194 13:32:53 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.453 [2024-04-26 13:32:53.755430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.453 13:32:53 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:22:36.712 13:32:54 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:22:36.970 13:32:54 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:36.970 13:32:54 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:22:36.970 13:32:54 -- common/autotest_common.sh@1184 -- # local i=0 00:22:36.970 13:32:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:22:36.970 13:32:54 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:22:36.970 13:32:54 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:22:36.970 13:32:54 -- common/autotest_common.sh@1191 -- # sleep 2 00:22:39.503 13:32:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:22:39.503 13:32:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:22:39.503 13:32:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:22:39.503 13:32:56 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:22:39.503 13:32:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:22:39.503 13:32:56 -- common/autotest_common.sh@1194 -- # return 0 00:22:39.503 13:32:56 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:39.503 [global] 00:22:39.503 thread=1 00:22:39.503 invalidate=1 00:22:39.503 rw=write 00:22:39.503 time_based=1 00:22:39.503 runtime=1 00:22:39.503 ioengine=libaio 00:22:39.503 direct=1 00:22:39.503 bs=4096 00:22:39.503 iodepth=1 
00:22:39.503 norandommap=0 00:22:39.503 numjobs=1 00:22:39.503 00:22:39.503 verify_dump=1 00:22:39.503 verify_backlog=512 00:22:39.503 verify_state_save=0 00:22:39.503 do_verify=1 00:22:39.503 verify=crc32c-intel 00:22:39.503 [job0] 00:22:39.503 filename=/dev/nvme0n1 00:22:39.503 [job1] 00:22:39.503 filename=/dev/nvme0n2 00:22:39.503 [job2] 00:22:39.503 filename=/dev/nvme0n3 00:22:39.503 [job3] 00:22:39.503 filename=/dev/nvme0n4 00:22:39.503 Could not set queue depth (nvme0n1) 00:22:39.503 Could not set queue depth (nvme0n2) 00:22:39.503 Could not set queue depth (nvme0n3) 00:22:39.503 Could not set queue depth (nvme0n4) 00:22:39.503 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.503 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.503 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.503 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:39.503 fio-3.35 00:22:39.503 Starting 4 threads 00:22:40.442 00:22:40.442 job0: (groupid=0, jobs=1): err= 0: pid=76012: Fri Apr 26 13:32:57 2024 00:22:40.442 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:22:40.442 slat (nsec): min=13956, max=69635, avg=19912.69, stdev=6232.97 00:22:40.442 clat (usec): min=147, max=524, avg=293.21, stdev=81.69 00:22:40.442 lat (usec): min=165, max=549, avg=313.12, stdev=86.05 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 184], 20.00th=[ 239], 00:22:40.442 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 297], 00:22:40.442 | 70.00th=[ 314], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 445], 00:22:40.442 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 515], 99.95th=[ 523], 00:22:40.442 | 99.99th=[ 523] 00:22:40.442 write: IOPS=1925, BW=7700KiB/s (7885kB/s)(7708KiB/1001msec); 0 zone resets 00:22:40.442 slat (usec): min=22, max=179, avg=34.08, stdev= 8.27 00:22:40.442 clat (usec): min=107, max=2787, avg=230.87, stdev=87.35 00:22:40.442 lat (usec): min=138, max=2850, avg=264.95, stdev=88.59 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 135], 00:22:40.442 | 30.00th=[ 225], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:22:40.442 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:22:40.442 | 99.00th=[ 314], 99.50th=[ 351], 99.90th=[ 742], 99.95th=[ 2802], 00:22:40.442 | 99.99th=[ 2802] 00:22:40.442 bw ( KiB/s): min= 8192, max= 8192, per=25.40%, avg=8192.00, stdev= 0.00, samples=1 00:22:40.442 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:40.442 lat (usec) : 250=36.01%, 500=63.85%, 750=0.12% 00:22:40.442 lat (msec) : 4=0.03% 00:22:40.442 cpu : usr=1.80%, sys=7.20%, ctx=3465, majf=0, minf=7 00:22:40.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 issued rwts: total=1536,1927,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:40.442 job1: (groupid=0, jobs=1): err= 0: pid=76015: Fri Apr 26 13:32:57 2024 00:22:40.442 read: IOPS=1060, BW=4244KiB/s (4346kB/s)(4248KiB/1001msec) 00:22:40.442 slat (nsec): min=9356, max=59141, avg=19966.61, 
stdev=7364.44 00:22:40.442 clat (usec): min=223, max=630, avg=470.35, stdev=46.15 00:22:40.442 lat (usec): min=248, max=655, avg=490.31, stdev=45.82 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 302], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 437], 00:22:40.442 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 486], 00:22:40.442 | 70.00th=[ 498], 80.00th=[ 510], 90.00th=[ 523], 95.00th=[ 537], 00:22:40.442 | 99.00th=[ 562], 99.50th=[ 570], 99.90th=[ 619], 99.95th=[ 627], 00:22:40.442 | 99.99th=[ 627] 00:22:40.442 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:40.442 slat (usec): min=14, max=110, avg=32.50, stdev= 8.28 00:22:40.442 clat (usec): min=126, max=434, avg=275.21, stdev=30.52 00:22:40.442 lat (usec): min=150, max=456, avg=307.71, stdev=29.24 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 239], 20.00th=[ 260], 00:22:40.442 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:22:40.442 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 322], 00:22:40.442 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 404], 99.95th=[ 437], 00:22:40.442 | 99.99th=[ 437] 00:22:40.442 bw ( KiB/s): min= 6432, max= 6432, per=19.94%, avg=6432.00, stdev= 0.00, samples=1 00:22:40.442 iops : min= 1608, max= 1608, avg=1608.00, stdev= 0.00, samples=1 00:22:40.442 lat (usec) : 250=7.35%, 500=80.72%, 750=11.93% 00:22:40.442 cpu : usr=2.20%, sys=5.10%, ctx=2598, majf=0, minf=12 00:22:40.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 issued rwts: total=1062,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:40.442 job2: (groupid=0, jobs=1): err= 0: pid=76018: Fri Apr 26 13:32:57 2024 00:22:40.442 read: IOPS=2844, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:22:40.442 slat (nsec): min=13666, max=44417, avg=16617.34, stdev=3012.50 00:22:40.442 clat (usec): min=147, max=321, avg=167.49, stdev=10.57 00:22:40.442 lat (usec): min=162, max=335, avg=184.10, stdev=11.00 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:22:40.442 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:22:40.442 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 180], 95.00th=[ 184], 00:22:40.442 | 99.00th=[ 196], 99.50th=[ 219], 99.90th=[ 245], 99.95th=[ 265], 00:22:40.442 | 99.99th=[ 322] 00:22:40.442 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:40.442 slat (nsec): min=18934, max=96029, avg=24207.96, stdev=5419.26 00:22:40.442 clat (usec): min=101, max=516, avg=127.21, stdev=13.96 00:22:40.442 lat (usec): min=129, max=538, avg=151.42, stdev=15.26 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 113], 5.00th=[ 116], 10.00th=[ 118], 20.00th=[ 120], 00:22:40.442 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:22:40.442 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:22:40.442 | 99.00th=[ 165], 99.50th=[ 184], 99.90th=[ 293], 99.95th=[ 338], 00:22:40.442 | 99.99th=[ 519] 00:22:40.442 bw ( KiB/s): min=12288, max=12288, per=38.10%, avg=12288.00, stdev= 0.00, samples=1 00:22:40.442 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:22:40.442 lat (usec) : 250=99.85%, 
500=0.14%, 750=0.02% 00:22:40.442 cpu : usr=1.60%, sys=9.70%, ctx=5921, majf=0, minf=5 00:22:40.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 issued rwts: total=2847,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:40.442 job3: (groupid=0, jobs=1): err= 0: pid=76019: Fri Apr 26 13:32:57 2024 00:22:40.442 read: IOPS=1060, BW=4244KiB/s (4346kB/s)(4248KiB/1001msec) 00:22:40.442 slat (nsec): min=11151, max=59754, avg=22104.41, stdev=6347.50 00:22:40.442 clat (usec): min=250, max=591, avg=467.98, stdev=44.17 00:22:40.442 lat (usec): min=278, max=616, avg=490.09, stdev=43.98 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 310], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 433], 00:22:40.442 | 30.00th=[ 445], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 482], 00:22:40.442 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 537], 00:22:40.442 | 99.00th=[ 562], 99.50th=[ 562], 99.90th=[ 570], 99.95th=[ 594], 00:22:40.442 | 99.99th=[ 594] 00:22:40.442 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:40.442 slat (usec): min=15, max=107, avg=27.63, stdev=11.02 00:22:40.442 clat (usec): min=130, max=381, avg=280.32, stdev=31.27 00:22:40.442 lat (usec): min=153, max=482, avg=307.95, stdev=29.68 00:22:40.442 clat percentiles (usec): 00:22:40.442 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 239], 20.00th=[ 265], 00:22:40.442 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:22:40.442 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:22:40.442 | 99.00th=[ 351], 99.50th=[ 371], 99.90th=[ 375], 99.95th=[ 383], 00:22:40.442 | 99.99th=[ 383] 00:22:40.442 bw ( KiB/s): min= 6440, max= 6440, per=19.97%, avg=6440.00, stdev= 0.00, samples=1 00:22:40.442 iops : min= 1610, max= 1610, avg=1610.00, stdev= 0.00, samples=1 00:22:40.442 lat (usec) : 250=6.58%, 500=82.79%, 750=10.62% 00:22:40.442 cpu : usr=1.80%, sys=5.10%, ctx=2598, majf=0, minf=11 00:22:40.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:40.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:40.442 issued rwts: total=1062,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:40.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:40.442 00:22:40.442 Run status group 0 (all jobs): 00:22:40.442 READ: bw=25.4MiB/s (26.6MB/s), 4244KiB/s-11.1MiB/s (4346kB/s-11.6MB/s), io=25.4MiB (26.7MB), run=1001-1001msec 00:22:40.442 WRITE: bw=31.5MiB/s (33.0MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=31.5MiB (33.1MB), run=1001-1001msec 00:22:40.442 00:22:40.442 Disk stats (read/write): 00:22:40.442 nvme0n1: ios=1465/1536, merge=0/0, ticks=451/372, in_queue=823, util=86.86% 00:22:40.442 nvme0n2: ios=1039/1100, merge=0/0, ticks=472/316, in_queue=788, util=87.13% 00:22:40.442 nvme0n3: ios=2440/2560, merge=0/0, ticks=410/354, in_queue=764, util=89.10% 00:22:40.442 nvme0n4: ios=1024/1101, merge=0/0, ticks=461/298, in_queue=759, util=89.66% 00:22:40.443 13:32:57 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:40.443 [global] 00:22:40.443 thread=1 00:22:40.443 invalidate=1 00:22:40.443 
rw=randwrite 00:22:40.443 time_based=1 00:22:40.443 runtime=1 00:22:40.443 ioengine=libaio 00:22:40.443 direct=1 00:22:40.443 bs=4096 00:22:40.443 iodepth=1 00:22:40.443 norandommap=0 00:22:40.443 numjobs=1 00:22:40.443 00:22:40.443 verify_dump=1 00:22:40.443 verify_backlog=512 00:22:40.443 verify_state_save=0 00:22:40.443 do_verify=1 00:22:40.443 verify=crc32c-intel 00:22:40.443 [job0] 00:22:40.443 filename=/dev/nvme0n1 00:22:40.443 [job1] 00:22:40.443 filename=/dev/nvme0n2 00:22:40.443 [job2] 00:22:40.443 filename=/dev/nvme0n3 00:22:40.443 [job3] 00:22:40.443 filename=/dev/nvme0n4 00:22:40.443 Could not set queue depth (nvme0n1) 00:22:40.443 Could not set queue depth (nvme0n2) 00:22:40.443 Could not set queue depth (nvme0n3) 00:22:40.443 Could not set queue depth (nvme0n4) 00:22:40.701 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:40.701 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:40.701 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:40.701 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:40.701 fio-3.35 00:22:40.701 Starting 4 threads 00:22:42.078 00:22:42.078 job0: (groupid=0, jobs=1): err= 0: pid=76073: Fri Apr 26 13:32:59 2024 00:22:42.078 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:22:42.078 slat (usec): min=11, max=170, avg=17.42, stdev= 5.79 00:22:42.078 clat (usec): min=115, max=1441, avg=181.36, stdev=67.77 00:22:42.078 lat (usec): min=154, max=1488, avg=198.78, stdev=68.64 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:22:42.078 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:22:42.078 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 253], 95.00th=[ 281], 00:22:42.078 | 99.00th=[ 388], 99.50th=[ 586], 99.90th=[ 1074], 99.95th=[ 1188], 00:22:42.078 | 99.99th=[ 1450] 00:22:42.078 write: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:22:42.078 slat (usec): min=19, max=107, avg=24.19, stdev= 5.52 00:22:42.078 clat (usec): min=98, max=535, avg=145.76, stdev=40.70 00:22:42.078 lat (usec): min=120, max=561, avg=169.95, stdev=42.61 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:22:42.078 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 133], 00:22:42.078 | 70.00th=[ 143], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 219], 00:22:42.078 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[ 400], 99.95th=[ 404], 00:22:42.078 | 99.99th=[ 537] 00:22:42.078 bw ( KiB/s): min=12263, max=12263, per=31.60%, avg=12263.00, stdev= 0.00, samples=1 00:22:42.078 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:22:42.078 lat (usec) : 100=0.09%, 250=94.68%, 500=4.93%, 750=0.19%, 1000=0.04% 00:22:42.078 lat (msec) : 2=0.07% 00:22:42.078 cpu : usr=2.00%, sys=8.60%, ctx=5409, majf=0, minf=15 00:22:42.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 issued rwts: total=2560,2831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:42.078 job1: (groupid=0, jobs=1): 
err= 0: pid=76074: Fri Apr 26 13:32:59 2024 00:22:42.078 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:22:42.078 slat (usec): min=8, max=152, avg=14.05, stdev= 8.78 00:22:42.078 clat (usec): min=128, max=40529, avg=327.06, stdev=1051.04 00:22:42.078 lat (usec): min=197, max=40539, avg=341.11, stdev=1050.97 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 233], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 258], 00:22:42.078 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:22:42.078 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 383], 00:22:42.078 | 99.00th=[ 429], 99.50th=[ 578], 99.90th=[ 7701], 99.95th=[40633], 00:22:42.078 | 99.99th=[40633] 00:22:42.078 write: IOPS=2001, BW=8008KiB/s (8200kB/s)(8016KiB/1001msec); 0 zone resets 00:22:42.078 slat (usec): min=10, max=1020, avg=22.85, stdev=23.16 00:22:42.078 clat (usec): min=83, max=509, avg=211.54, stdev=44.30 00:22:42.078 lat (usec): min=130, max=1103, avg=234.39, stdev=48.48 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 119], 5.00th=[ 130], 10.00th=[ 143], 20.00th=[ 186], 00:22:42.078 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 221], 00:22:42.078 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 273], 00:22:42.078 | 99.00th=[ 326], 99.50th=[ 412], 99.90th=[ 478], 99.95th=[ 486], 00:22:42.078 | 99.99th=[ 510] 00:22:42.078 bw ( KiB/s): min= 8192, max= 8192, per=21.11%, avg=8192.00, stdev= 0.00, samples=1 00:22:42.078 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:22:42.078 lat (usec) : 100=0.03%, 250=52.82%, 500=46.75%, 750=0.23% 00:22:42.078 lat (msec) : 2=0.03%, 4=0.08%, 10=0.03%, 50=0.03% 00:22:42.078 cpu : usr=1.60%, sys=4.90%, ctx=3560, majf=0, minf=12 00:22:42.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 issued rwts: total=1536,2004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:42.078 job2: (groupid=0, jobs=1): err= 0: pid=76075: Fri Apr 26 13:32:59 2024 00:22:42.078 read: IOPS=1808, BW=7233KiB/s (7406kB/s)(7240KiB/1001msec) 00:22:42.078 slat (nsec): min=9290, max=55343, avg=15231.76, stdev=2931.61 00:22:42.078 clat (usec): min=154, max=40501, avg=284.31, stdev=947.42 00:22:42.078 lat (usec): min=169, max=40516, avg=299.55, stdev=947.41 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 235], 00:22:42.078 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:22:42.078 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 343], 95.00th=[ 355], 00:22:42.078 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 562], 99.95th=[40633], 00:22:42.078 | 99.99th=[40633] 00:22:42.078 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:22:42.078 slat (usec): min=11, max=3191, avg=25.30, stdev=82.00 00:22:42.078 clat (usec): min=3, max=7593, avg=194.81, stdev=181.97 00:22:42.078 lat (usec): min=139, max=7613, avg=220.11, stdev=196.25 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 129], 20.00th=[ 135], 00:22:42.078 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 178], 60.00th=[ 212], 00:22:42.078 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:22:42.078 | 99.00th=[ 310], 99.50th=[ 383], 99.90th=[ 1450], 
99.95th=[ 1827], 00:22:42.078 | 99.99th=[ 7570] 00:22:42.078 bw ( KiB/s): min= 8175, max= 8175, per=21.06%, avg=8175.00, stdev= 0.00, samples=1 00:22:42.078 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:22:42.078 lat (usec) : 4=0.08%, 100=0.03%, 250=57.44%, 500=42.22%, 750=0.05% 00:22:42.078 lat (usec) : 1000=0.03% 00:22:42.078 lat (msec) : 2=0.10%, 10=0.03%, 50=0.03% 00:22:42.078 cpu : usr=1.60%, sys=5.80%, ctx=3884, majf=0, minf=11 00:22:42.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 issued rwts: total=1810,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:42.078 job3: (groupid=0, jobs=1): err= 0: pid=76076: Fri Apr 26 13:32:59 2024 00:22:42.078 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:22:42.078 slat (usec): min=11, max=115, avg=18.78, stdev= 7.16 00:22:42.078 clat (usec): min=144, max=554, avg=181.80, stdev=28.13 00:22:42.078 lat (usec): min=167, max=571, avg=200.59, stdev=29.25 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:22:42.078 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:22:42.078 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 241], 00:22:42.078 | 99.00th=[ 306], 99.50th=[ 355], 99.90th=[ 375], 99.95th=[ 383], 00:22:42.078 | 99.99th=[ 553] 00:22:42.078 write: IOPS=2826, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:22:42.078 slat (usec): min=11, max=114, avg=23.50, stdev= 6.15 00:22:42.078 clat (usec): min=108, max=1517, avg=144.67, stdev=42.69 00:22:42.078 lat (usec): min=133, max=1547, avg=168.17, stdev=42.37 00:22:42.078 clat percentiles (usec): 00:22:42.078 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:22:42.078 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:22:42.078 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 251], 00:22:42.078 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 404], 99.95th=[ 594], 00:22:42.078 | 99.99th=[ 1516] 00:22:42.078 bw ( KiB/s): min=12263, max=12263, per=31.60%, avg=12263.00, stdev= 0.00, samples=1 00:22:42.078 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:22:42.078 lat (usec) : 250=95.05%, 500=4.90%, 750=0.04% 00:22:42.078 lat (msec) : 2=0.02% 00:22:42.078 cpu : usr=1.90%, sys=8.90%, ctx=5389, majf=0, minf=7 00:22:42.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:42.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:42.078 issued rwts: total=2560,2829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:42.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:42.078 00:22:42.078 Run status group 0 (all jobs): 00:22:42.078 READ: bw=33.0MiB/s (34.6MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:22:42.078 WRITE: bw=37.9MiB/s (39.7MB/s), 8008KiB/s-11.0MiB/s (8200kB/s-11.6MB/s), io=37.9MiB (39.8MB), run=1001-1001msec 00:22:42.078 00:22:42.078 Disk stats (read/write): 00:22:42.078 nvme0n1: ios=2332/2560, merge=0/0, ticks=428/394, in_queue=822, util=88.16% 00:22:42.078 nvme0n2: ios=1481/1536, merge=0/0, ticks=477/331, in_queue=808, util=87.84% 
00:22:42.078 nvme0n3: ios=1536/1784, merge=0/0, ticks=445/351, in_queue=796, util=89.03% 00:22:42.078 nvme0n4: ios=2257/2560, merge=0/0, ticks=406/371, in_queue=777, util=89.80% 00:22:42.078 13:32:59 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:42.078 [global] 00:22:42.078 thread=1 00:22:42.078 invalidate=1 00:22:42.078 rw=write 00:22:42.078 time_based=1 00:22:42.078 runtime=1 00:22:42.078 ioengine=libaio 00:22:42.078 direct=1 00:22:42.078 bs=4096 00:22:42.078 iodepth=128 00:22:42.079 norandommap=0 00:22:42.079 numjobs=1 00:22:42.079 00:22:42.079 verify_dump=1 00:22:42.079 verify_backlog=512 00:22:42.079 verify_state_save=0 00:22:42.079 do_verify=1 00:22:42.079 verify=crc32c-intel 00:22:42.079 [job0] 00:22:42.079 filename=/dev/nvme0n1 00:22:42.079 [job1] 00:22:42.079 filename=/dev/nvme0n2 00:22:42.079 [job2] 00:22:42.079 filename=/dev/nvme0n3 00:22:42.079 [job3] 00:22:42.079 filename=/dev/nvme0n4 00:22:42.079 Could not set queue depth (nvme0n1) 00:22:42.079 Could not set queue depth (nvme0n2) 00:22:42.079 Could not set queue depth (nvme0n3) 00:22:42.079 Could not set queue depth (nvme0n4) 00:22:42.079 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:42.079 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:42.079 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:42.079 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:42.079 fio-3.35 00:22:42.079 Starting 4 threads 00:22:43.465 00:22:43.465 job0: (groupid=0, jobs=1): err= 0: pid=76137: Fri Apr 26 13:33:00 2024 00:22:43.465 read: IOPS=2506, BW=9.79MiB/s (10.3MB/s)(9.88MiB/1009msec) 00:22:43.465 slat (usec): min=8, max=15112, avg=191.51, stdev=1042.62 00:22:43.465 clat (usec): min=1974, max=55010, avg=25331.80, stdev=8046.55 00:22:43.465 lat (usec): min=8770, max=55026, avg=25523.31, stdev=8022.75 00:22:43.465 clat percentiles (usec): 00:22:43.465 | 1.00th=[ 9372], 5.00th=[19268], 10.00th=[19792], 20.00th=[20317], 00:22:43.465 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[22414], 00:22:43.465 | 70.00th=[26084], 80.00th=[32113], 90.00th=[35914], 95.00th=[43254], 00:22:43.465 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:22:43.465 | 99.99th=[54789] 00:22:43.465 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:22:43.465 slat (usec): min=14, max=12691, avg=194.17, stdev=1010.29 00:22:43.465 clat (usec): min=11083, max=34759, avg=24373.16, stdev=7335.71 00:22:43.465 lat (usec): min=13842, max=37658, avg=24567.33, stdev=7331.97 00:22:43.465 clat percentiles (usec): 00:22:43.465 | 1.00th=[13829], 5.00th=[14615], 10.00th=[15139], 20.00th=[16319], 00:22:43.465 | 30.00th=[17171], 40.00th=[20055], 50.00th=[25035], 60.00th=[28705], 00:22:43.465 | 70.00th=[31327], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:22:43.465 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:22:43.465 | 99.99th=[34866] 00:22:43.465 bw ( KiB/s): min= 9208, max=11272, per=17.39%, avg=10240.00, stdev=1459.47, samples=2 00:22:43.465 iops : min= 2302, max= 2818, avg=2560.00, stdev=364.87, samples=2 00:22:43.465 lat (msec) : 2=0.02%, 10=0.63%, 20=26.31%, 50=72.43%, 100=0.61% 00:22:43.465 cpu : usr=3.17%, sys=7.84%, ctx=161, majf=0, minf=9 00:22:43.465 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:43.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.465 issued rwts: total=2529,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.465 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.465 job1: (groupid=0, jobs=1): err= 0: pid=76138: Fri Apr 26 13:33:00 2024 00:22:43.465 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:22:43.465 slat (usec): min=3, max=9142, avg=157.61, stdev=737.01 00:22:43.465 clat (usec): min=10807, max=48534, avg=19593.80, stdev=5413.76 00:22:43.465 lat (usec): min=10832, max=49400, avg=19751.42, stdev=5480.24 00:22:43.465 clat percentiles (usec): 00:22:43.465 | 1.00th=[13698], 5.00th=[14877], 10.00th=[15533], 20.00th=[16319], 00:22:43.465 | 30.00th=[16581], 40.00th=[16909], 50.00th=[16909], 60.00th=[17433], 00:22:43.465 | 70.00th=[19530], 80.00th=[23462], 90.00th=[27919], 95.00th=[31327], 00:22:43.465 | 99.00th=[36439], 99.50th=[43779], 99.90th=[46400], 99.95th=[48497], 00:22:43.465 | 99.99th=[48497] 00:22:43.465 write: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1009msec); 0 zone resets 00:22:43.465 slat (usec): min=11, max=11088, avg=186.75, stdev=703.41 00:22:43.465 clat (usec): min=8282, max=59592, avg=25360.66, stdev=8229.91 00:22:43.465 lat (usec): min=8303, max=59637, avg=25547.41, stdev=8279.43 00:22:43.465 clat percentiles (usec): 00:22:43.465 | 1.00th=[11994], 5.00th=[13829], 10.00th=[16057], 20.00th=[18744], 00:22:43.465 | 30.00th=[22676], 40.00th=[23462], 50.00th=[25297], 60.00th=[25560], 00:22:43.466 | 70.00th=[26084], 80.00th=[28705], 90.00th=[34866], 95.00th=[43254], 00:22:43.466 | 99.00th=[54264], 99.50th=[55837], 99.90th=[56886], 99.95th=[57410], 00:22:43.466 | 99.99th=[59507] 00:22:43.466 bw ( KiB/s): min=10752, max=12600, per=19.83%, avg=11676.00, stdev=1306.73, samples=2 00:22:43.466 iops : min= 2688, max= 3150, avg=2919.00, stdev=326.68, samples=2 00:22:43.466 lat (msec) : 10=0.20%, 20=44.33%, 50=54.08%, 100=1.39% 00:22:43.466 cpu : usr=3.47%, sys=9.62%, ctx=553, majf=0, minf=2 00:22:43.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:43.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.466 issued rwts: total=2560,3046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.466 job2: (groupid=0, jobs=1): err= 0: pid=76139: Fri Apr 26 13:33:00 2024 00:22:43.466 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1009msec) 00:22:43.466 slat (usec): min=3, max=13948, avg=129.63, stdev=640.89 00:22:43.466 clat (usec): min=8559, max=45390, avg=16092.80, stdev=7194.46 00:22:43.466 lat (usec): min=8569, max=46688, avg=16222.43, stdev=7272.64 00:22:43.466 clat percentiles (usec): 00:22:43.466 | 1.00th=[ 9241], 5.00th=[10683], 10.00th=[11600], 20.00th=[12125], 00:22:43.466 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13698], 00:22:43.466 | 70.00th=[14615], 80.00th=[16712], 90.00th=[29230], 95.00th=[34341], 00:22:43.466 | 99.00th=[39584], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:22:43.466 | 99.99th=[45351] 00:22:43.466 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:22:43.466 slat (usec): min=10, max=7933, avg=115.96, stdev=532.44 00:22:43.466 clat (usec): min=7729, 
max=61077, avg=16242.61, stdev=9561.42 00:22:43.466 lat (usec): min=7747, max=61125, avg=16358.56, stdev=9617.88 00:22:43.466 clat percentiles (usec): 00:22:43.466 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[11731], 20.00th=[12256], 00:22:43.466 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12649], 60.00th=[13173], 00:22:43.466 | 70.00th=[13304], 80.00th=[14222], 90.00th=[28443], 95.00th=[43254], 00:22:43.466 | 99.00th=[52691], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:22:43.466 | 99.99th=[61080] 00:22:43.466 bw ( KiB/s): min=12288, max=20480, per=27.82%, avg=16384.00, stdev=5792.62, samples=2 00:22:43.466 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:22:43.466 lat (msec) : 10=3.08%, 20=80.89%, 50=15.15%, 100=0.89% 00:22:43.466 cpu : usr=4.66%, sys=10.52%, ctx=630, majf=0, minf=1 00:22:43.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:43.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.466 issued rwts: total=3794,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.466 job3: (groupid=0, jobs=1): err= 0: pid=76140: Fri Apr 26 13:33:00 2024 00:22:43.466 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:22:43.466 slat (usec): min=4, max=3162, avg=92.25, stdev=426.25 00:22:43.466 clat (usec): min=5181, max=14883, avg=12546.91, stdev=1461.02 00:22:43.466 lat (usec): min=5221, max=16809, avg=12639.16, stdev=1415.79 00:22:43.466 clat percentiles (usec): 00:22:43.466 | 1.00th=[ 8356], 5.00th=[10552], 10.00th=[11076], 20.00th=[11207], 00:22:43.466 | 30.00th=[11469], 40.00th=[11731], 50.00th=[13173], 60.00th=[13435], 00:22:43.466 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14222], 00:22:43.466 | 99.00th=[14615], 99.50th=[14746], 99.90th=[14877], 99.95th=[14877], 00:22:43.466 | 99.99th=[14877] 00:22:43.466 write: IOPS=5142, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1002msec); 0 zone resets 00:22:43.466 slat (usec): min=10, max=3118, avg=94.57, stdev=412.25 00:22:43.466 clat (usec): min=265, max=15239, avg=12075.85, stdev=1808.13 00:22:43.466 lat (usec): min=2311, max=15318, avg=12170.41, stdev=1808.95 00:22:43.466 clat percentiles (usec): 00:22:43.466 | 1.00th=[ 9110], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10945], 00:22:43.466 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:22:43.466 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14353], 95.00th=[14484], 00:22:43.466 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15139], 99.95th=[15139], 00:22:43.466 | 99.99th=[15270] 00:22:43.466 bw ( KiB/s): min=20439, max=20439, per=34.71%, avg=20439.00, stdev= 0.00, samples=1 00:22:43.466 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:22:43.466 lat (usec) : 500=0.01% 00:22:43.466 lat (msec) : 4=0.31%, 10=9.90%, 20=89.78% 00:22:43.466 cpu : usr=4.70%, sys=14.09%, ctx=534, majf=0, minf=1 00:22:43.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:43.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:43.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:43.466 issued rwts: total=5120,5153,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:43.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:43.466 00:22:43.466 Run status group 0 (all jobs): 00:22:43.466 READ: bw=54.2MiB/s (56.8MB/s), 
9.79MiB/s-20.0MiB/s (10.3MB/s-20.9MB/s), io=54.7MiB (57.4MB), run=1002-1009msec 00:22:43.466 WRITE: bw=57.5MiB/s (60.3MB/s), 9.91MiB/s-20.1MiB/s (10.4MB/s-21.1MB/s), io=58.0MiB (60.8MB), run=1002-1009msec 00:22:43.466 00:22:43.466 Disk stats (read/write): 00:22:43.466 nvme0n1: ios=2098/2080, merge=0/0, ticks=12382/12363, in_queue=24745, util=87.58% 00:22:43.466 nvme0n2: ios=2552/2560, merge=0/0, ticks=21228/26379, in_queue=47607, util=88.74% 00:22:43.466 nvme0n3: ios=3584/3825, merge=0/0, ticks=24226/21567, in_queue=45793, util=88.99% 00:22:43.466 nvme0n4: ios=4096/4502, merge=0/0, ticks=12063/12157, in_queue=24220, util=89.66% 00:22:43.466 13:33:00 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:43.466 [global] 00:22:43.466 thread=1 00:22:43.466 invalidate=1 00:22:43.466 rw=randwrite 00:22:43.466 time_based=1 00:22:43.466 runtime=1 00:22:43.466 ioengine=libaio 00:22:43.466 direct=1 00:22:43.466 bs=4096 00:22:43.466 iodepth=128 00:22:43.466 norandommap=0 00:22:43.466 numjobs=1 00:22:43.466 00:22:43.466 verify_dump=1 00:22:43.466 verify_backlog=512 00:22:43.466 verify_state_save=0 00:22:43.466 do_verify=1 00:22:43.466 verify=crc32c-intel 00:22:43.466 [job0] 00:22:43.466 filename=/dev/nvme0n1 00:22:43.466 [job1] 00:22:43.466 filename=/dev/nvme0n2 00:22:43.466 [job2] 00:22:43.466 filename=/dev/nvme0n3 00:22:43.466 [job3] 00:22:43.466 filename=/dev/nvme0n4 00:22:43.466 Could not set queue depth (nvme0n1) 00:22:43.466 Could not set queue depth (nvme0n2) 00:22:43.466 Could not set queue depth (nvme0n3) 00:22:43.466 Could not set queue depth (nvme0n4) 00:22:43.466 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:43.466 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:43.466 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:43.466 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:43.466 fio-3.35 00:22:43.466 Starting 4 threads 00:22:44.839 00:22:44.839 job0: (groupid=0, jobs=1): err= 0: pid=76194: Fri Apr 26 13:33:01 2024 00:22:44.839 read: IOPS=2546, BW=9.95MiB/s (10.4MB/s)(10.1MiB/1017msec) 00:22:44.839 slat (usec): min=6, max=19440, avg=135.53, stdev=907.05 00:22:44.840 clat (usec): min=5551, max=39892, avg=16554.66, stdev=6505.22 00:22:44.840 lat (usec): min=5564, max=39911, avg=16690.19, stdev=6553.98 00:22:44.840 clat percentiles (usec): 00:22:44.840 | 1.00th=[ 5932], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:22:44.840 | 30.00th=[11338], 40.00th=[12387], 50.00th=[15664], 60.00th=[19006], 00:22:44.840 | 70.00th=[20317], 80.00th=[22152], 90.00th=[24511], 95.00th=[28181], 00:22:44.840 | 99.00th=[36439], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:22:44.840 | 99.99th=[40109] 00:22:44.840 write: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec); 0 zone resets 00:22:44.840 slat (usec): min=6, max=19980, avg=202.84, stdev=1043.17 00:22:44.840 clat (usec): min=4900, max=98095, avg=27960.71, stdev=16452.58 00:22:44.840 lat (usec): min=4923, max=98107, avg=28163.56, stdev=16528.95 00:22:44.840 clat percentiles (usec): 00:22:44.840 | 1.00th=[ 5342], 5.00th=[ 9110], 10.00th=[18744], 20.00th=[20579], 00:22:44.840 | 30.00th=[21103], 40.00th=[21365], 50.00th=[21627], 60.00th=[22414], 00:22:44.840 | 70.00th=[23200], 80.00th=[34341], 
90.00th=[57410], 95.00th=[60556], 00:22:44.840 | 99.00th=[94897], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042], 00:22:44.840 | 99.99th=[98042] 00:22:44.840 bw ( KiB/s): min=11776, max=12040, per=23.74%, avg=11908.00, stdev=186.68, samples=2 00:22:44.840 iops : min= 2944, max= 3010, avg=2977.00, stdev=46.67, samples=2 00:22:44.840 lat (msec) : 10=10.74%, 20=26.76%, 50=55.42%, 100=7.08% 00:22:44.840 cpu : usr=2.95%, sys=7.78%, ctx=426, majf=0, minf=6 00:22:44.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:44.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.840 issued rwts: total=2590,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.840 job1: (groupid=0, jobs=1): err= 0: pid=76195: Fri Apr 26 13:33:01 2024 00:22:44.840 read: IOPS=2857, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1007msec) 00:22:44.840 slat (usec): min=6, max=21950, avg=195.53, stdev=1270.93 00:22:44.840 clat (msec): min=4, max=122, avg=21.22, stdev=16.92 00:22:44.840 lat (msec): min=4, max=122, avg=21.42, stdev=17.07 00:22:44.840 clat percentiles (msec): 00:22:44.840 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:22:44.840 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 21], 00:22:44.840 | 70.00th=[ 23], 80.00th=[ 28], 90.00th=[ 39], 95.00th=[ 51], 00:22:44.840 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 124], 99.95th=[ 124], 00:22:44.840 | 99.99th=[ 124] 00:22:44.840 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:22:44.840 slat (usec): min=5, max=17006, avg=134.19, stdev=741.45 00:22:44.840 clat (msec): min=3, max=122, avg=21.63, stdev=14.97 00:22:44.840 lat (msec): min=3, max=122, avg=21.77, stdev=15.03 00:22:44.840 clat percentiles (msec): 00:22:44.840 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:22:44.840 | 30.00th=[ 17], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:22:44.840 | 70.00th=[ 23], 80.00th=[ 23], 90.00th=[ 31], 95.00th=[ 56], 00:22:44.840 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 110], 99.95th=[ 124], 00:22:44.840 | 99.99th=[ 124] 00:22:44.840 bw ( KiB/s): min=12288, max=12288, per=24.50%, avg=12288.00, stdev= 0.00, samples=2 00:22:44.840 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:22:44.840 lat (msec) : 4=0.20%, 10=14.94%, 20=34.11%, 50=45.05%, 100=4.50% 00:22:44.840 lat (msec) : 250=1.19% 00:22:44.840 cpu : usr=3.48%, sys=7.65%, ctx=364, majf=0, minf=9 00:22:44.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:44.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.840 issued rwts: total=2877,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.840 job2: (groupid=0, jobs=1): err= 0: pid=76196: Fri Apr 26 13:33:01 2024 00:22:44.840 read: IOPS=3017, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1018msec) 00:22:44.840 slat (usec): min=6, max=22107, avg=170.48, stdev=1066.94 00:22:44.840 clat (usec): min=5552, max=65570, avg=18024.98, stdev=10038.30 00:22:44.840 lat (usec): min=5565, max=65587, avg=18195.47, stdev=10168.86 00:22:44.840 clat percentiles (usec): 00:22:44.840 | 1.00th=[ 6128], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11731], 00:22:44.840 | 30.00th=[12125], 40.00th=[12780], 
50.00th=[14091], 60.00th=[16188], 00:22:44.840 | 70.00th=[20317], 80.00th=[21365], 90.00th=[30016], 95.00th=[40633], 00:22:44.840 | 99.00th=[57410], 99.50th=[57934], 99.90th=[65274], 99.95th=[65799], 00:22:44.840 | 99.99th=[65799] 00:22:44.840 write: IOPS=3432, BW=13.4MiB/s (14.1MB/s)(13.6MiB/1018msec); 0 zone resets 00:22:44.840 slat (usec): min=6, max=17209, avg=128.08, stdev=680.23 00:22:44.840 clat (usec): min=3349, max=65452, avg=21059.52, stdev=10899.98 00:22:44.840 lat (usec): min=3373, max=65464, avg=21187.60, stdev=10945.61 00:22:44.840 clat percentiles (usec): 00:22:44.840 | 1.00th=[ 5276], 5.00th=[ 8586], 10.00th=[10552], 20.00th=[11207], 00:22:44.840 | 30.00th=[16909], 40.00th=[20317], 50.00th=[21365], 60.00th=[21627], 00:22:44.840 | 70.00th=[21890], 80.00th=[22676], 90.00th=[32113], 95.00th=[53216], 00:22:44.840 | 99.00th=[55837], 99.50th=[55837], 99.90th=[64226], 99.95th=[65274], 00:22:44.840 | 99.99th=[65274] 00:22:44.840 bw ( KiB/s): min=11776, max=15182, per=26.87%, avg=13479.00, stdev=2408.41, samples=2 00:22:44.840 iops : min= 2944, max= 3795, avg=3369.50, stdev=601.75, samples=2 00:22:44.840 lat (msec) : 4=0.09%, 10=7.95%, 20=44.70%, 50=42.92%, 100=4.34% 00:22:44.840 cpu : usr=4.03%, sys=7.96%, ctx=442, majf=0, minf=3 00:22:44.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:44.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.840 issued rwts: total=3072,3494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.840 job3: (groupid=0, jobs=1): err= 0: pid=76197: Fri Apr 26 13:33:01 2024 00:22:44.840 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:22:44.840 slat (usec): min=6, max=19733, avg=139.59, stdev=952.74 00:22:44.840 clat (usec): min=5805, max=39981, avg=17770.60, stdev=5906.64 00:22:44.840 lat (usec): min=5821, max=39999, avg=17910.19, stdev=5955.82 00:22:44.840 clat percentiles (usec): 00:22:44.840 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[10421], 20.00th=[12125], 00:22:44.840 | 30.00th=[13304], 40.00th=[15139], 50.00th=[18482], 60.00th=[19268], 00:22:44.840 | 70.00th=[20841], 80.00th=[21890], 90.00th=[24511], 95.00th=[28705], 00:22:44.840 | 99.00th=[35914], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:22:44.840 | 99.99th=[40109] 00:22:44.840 write: IOPS=3101, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1009msec); 0 zone resets 00:22:44.840 slat (usec): min=5, max=19426, avg=173.83, stdev=973.62 00:22:44.840 clat (usec): min=4885, max=99982, avg=23409.27, stdev=15022.82 00:22:44.840 lat (usec): min=4908, max=99992, avg=23583.09, stdev=15112.83 00:22:44.840 clat percentiles (msec): 00:22:44.840 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 14], 00:22:44.840 | 30.00th=[ 21], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:22:44.840 | 70.00th=[ 23], 80.00th=[ 23], 90.00th=[ 37], 95.00th=[ 55], 00:22:44.840 | 99.00th=[ 93], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 101], 00:22:44.840 | 99.99th=[ 101] 00:22:44.840 bw ( KiB/s): min=12288, max=12312, per=24.52%, avg=12300.00, stdev=16.97, samples=2 00:22:44.840 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:22:44.840 lat (msec) : 10=3.79%, 20=44.12%, 50=48.77%, 100=3.32% 00:22:44.840 cpu : usr=2.88%, sys=8.83%, ctx=433, majf=0, minf=3 00:22:44.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:44.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:44.840 issued rwts: total=3072,3129,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:44.840 00:22:44.840 Run status group 0 (all jobs): 00:22:44.840 READ: bw=44.6MiB/s (46.7MB/s), 9.95MiB/s-11.9MiB/s (10.4MB/s-12.5MB/s), io=45.4MiB (47.6MB), run=1007-1018msec 00:22:44.840 WRITE: bw=49.0MiB/s (51.4MB/s), 11.8MiB/s-13.4MiB/s (12.4MB/s-14.1MB/s), io=49.9MiB (52.3MB), run=1007-1018msec 00:22:44.840 00:22:44.840 Disk stats (read/write): 00:22:44.840 nvme0n1: ios=2215/2560, merge=0/0, ticks=35995/68900, in_queue=104895, util=87.58% 00:22:44.840 nvme0n2: ios=2609/2959, merge=0/0, ticks=46119/56250, in_queue=102369, util=88.97% 00:22:44.840 nvme0n3: ios=2560/2967, merge=0/0, ticks=44249/58221, in_queue=102470, util=89.14% 00:22:44.840 nvme0n4: ios=2173/2560, merge=0/0, ticks=39307/65442, in_queue=104749, util=89.70% 00:22:44.840 13:33:01 -- target/fio.sh@55 -- # sync 00:22:44.840 13:33:01 -- target/fio.sh@59 -- # fio_pid=76210 00:22:44.840 13:33:01 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:44.840 13:33:01 -- target/fio.sh@61 -- # sleep 3 00:22:44.840 [global] 00:22:44.840 thread=1 00:22:44.840 invalidate=1 00:22:44.840 rw=read 00:22:44.840 time_based=1 00:22:44.840 runtime=10 00:22:44.840 ioengine=libaio 00:22:44.840 direct=1 00:22:44.840 bs=4096 00:22:44.840 iodepth=1 00:22:44.840 norandommap=1 00:22:44.840 numjobs=1 00:22:44.840 00:22:44.840 [job0] 00:22:44.840 filename=/dev/nvme0n1 00:22:44.840 [job1] 00:22:44.840 filename=/dev/nvme0n2 00:22:44.840 [job2] 00:22:44.840 filename=/dev/nvme0n3 00:22:44.840 [job3] 00:22:44.840 filename=/dev/nvme0n4 00:22:44.840 Could not set queue depth (nvme0n1) 00:22:44.840 Could not set queue depth (nvme0n2) 00:22:44.840 Could not set queue depth (nvme0n3) 00:22:44.840 Could not set queue depth (nvme0n4) 00:22:44.840 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:44.840 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:44.840 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:44.840 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:44.840 fio-3.35 00:22:44.840 Starting 4 threads 00:22:48.136 13:33:04 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:48.136 fio: pid=76253, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:48.136 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36073472, buflen=4096 00:22:48.136 13:33:05 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:48.136 fio: pid=76252, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:48.136 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=54444032, buflen=4096 00:22:48.136 13:33:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:48.136 13:33:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:22:48.392 fio: pid=76250, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:48.392 fio: io_u error on file 
/dev/nvme0n1: Remote I/O error: read offset=42868736, buflen=4096 00:22:48.392 13:33:05 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:48.392 13:33:05 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:48.649 fio: pid=76251, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:48.649 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=5001216, buflen=4096 00:22:48.649 00:22:48.649 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76250: Fri Apr 26 13:33:05 2024 00:22:48.649 read: IOPS=3086, BW=12.1MiB/s (12.6MB/s)(40.9MiB/3391msec) 00:22:48.649 slat (usec): min=8, max=13939, avg=18.88, stdev=205.59 00:22:48.649 clat (usec): min=141, max=3703, avg=303.38, stdev=84.34 00:22:48.649 lat (usec): min=155, max=14162, avg=322.26, stdev=221.34 00:22:48.649 clat percentiles (usec): 00:22:48.650 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 192], 20.00th=[ 277], 00:22:48.650 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 314], 00:22:48.650 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 392], 00:22:48.650 | 99.00th=[ 453], 99.50th=[ 486], 99.90th=[ 603], 99.95th=[ 1057], 00:22:48.650 | 99.99th=[ 3589] 00:22:48.650 bw ( KiB/s): min=10832, max=15400, per=22.84%, avg=12338.67, stdev=1762.53, samples=6 00:22:48.650 iops : min= 2708, max= 3850, avg=3084.67, stdev=440.63, samples=6 00:22:48.650 lat (usec) : 250=14.19%, 500=85.43%, 750=0.32% 00:22:48.650 lat (msec) : 2=0.02%, 4=0.04% 00:22:48.650 cpu : usr=1.12%, sys=4.07%, ctx=10480, majf=0, minf=1 00:22:48.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 issued rwts: total=10467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:48.650 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76251: Fri Apr 26 13:33:05 2024 00:22:48.650 read: IOPS=4739, BW=18.5MiB/s (19.4MB/s)(68.8MiB/3715msec) 00:22:48.650 slat (usec): min=11, max=12402, avg=19.14, stdev=151.64 00:22:48.650 clat (usec): min=2, max=26256, avg=190.19, stdev=221.20 00:22:48.650 lat (usec): min=152, max=26269, avg=209.33, stdev=278.43 00:22:48.650 clat percentiles (usec): 00:22:48.650 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 159], 00:22:48.650 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:22:48.650 | 70.00th=[ 186], 80.00th=[ 206], 90.00th=[ 247], 95.00th=[ 289], 00:22:48.650 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 857], 99.95th=[ 2040], 00:22:48.650 | 99.99th=[ 7701] 00:22:48.650 bw ( KiB/s): min=14006, max=22448, per=35.40%, avg=19122.00, stdev=3240.98, samples=7 00:22:48.650 iops : min= 3501, max= 5612, avg=4780.43, stdev=810.38, samples=7 00:22:48.650 lat (usec) : 4=0.01%, 50=0.01%, 250=90.83%, 500=8.95%, 750=0.09% 00:22:48.650 lat (usec) : 1000=0.03% 00:22:48.650 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01%, 50=0.01% 00:22:48.650 cpu : usr=1.29%, sys=6.62%, ctx=17632, majf=0, minf=1 00:22:48.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:22:48.650 issued rwts: total=17606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:48.650 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76252: Fri Apr 26 13:33:05 2024 00:22:48.650 read: IOPS=4217, BW=16.5MiB/s (17.3MB/s)(51.9MiB/3152msec) 00:22:48.650 slat (usec): min=12, max=9805, avg=19.77, stdev=106.55 00:22:48.650 clat (usec): min=107, max=2416, avg=215.64, stdev=64.24 00:22:48.650 lat (usec): min=165, max=10080, avg=235.41, stdev=126.12 00:22:48.650 clat percentiles (usec): 00:22:48.650 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:22:48.650 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 231], 00:22:48.650 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 302], 00:22:48.650 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 594], 99.95th=[ 1418], 00:22:48.650 | 99.99th=[ 1663] 00:22:48.650 bw ( KiB/s): min=13096, max=21008, per=31.52%, avg=17028.00, stdev=4102.70, samples=6 00:22:48.650 iops : min= 3274, max= 5252, avg=4257.00, stdev=1025.68, samples=6 00:22:48.650 lat (usec) : 250=66.25%, 500=33.57%, 750=0.09%, 1000=0.03% 00:22:48.650 lat (msec) : 2=0.05%, 4=0.01% 00:22:48.650 cpu : usr=1.33%, sys=6.35%, ctx=13300, majf=0, minf=1 00:22:48.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 issued rwts: total=13293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:48.650 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76253: Fri Apr 26 13:33:05 2024 00:22:48.650 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(34.4MiB/2899msec) 00:22:48.650 slat (nsec): min=9185, max=91986, avg=15114.73, stdev=3820.83 00:22:48.650 clat (usec): min=150, max=7574, avg=312.20, stdev=122.55 00:22:48.650 lat (usec): min=170, max=7602, avg=327.32, stdev=122.31 00:22:48.650 clat percentiles (usec): 00:22:48.650 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 269], 20.00th=[ 285], 00:22:48.650 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:22:48.650 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 392], 00:22:48.650 | 99.00th=[ 449], 99.50th=[ 469], 99.90th=[ 570], 99.95th=[ 3752], 00:22:48.650 | 99.99th=[ 7570] 00:22:48.650 bw ( KiB/s): min=10832, max=14312, per=22.91%, avg=12374.40, stdev=1452.80, samples=5 00:22:48.650 iops : min= 2708, max= 3578, avg=3093.60, stdev=363.20, samples=5 00:22:48.650 lat (usec) : 250=9.13%, 500=90.59%, 750=0.19%, 1000=0.02% 00:22:48.650 lat (msec) : 4=0.03%, 10=0.02% 00:22:48.650 cpu : usr=0.90%, sys=4.11%, ctx=8812, majf=0, minf=1 00:22:48.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:48.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:48.650 issued rwts: total=8808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:48.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:48.650 00:22:48.650 Run status group 0 (all jobs): 00:22:48.650 READ: bw=52.8MiB/s (55.3MB/s), 11.9MiB/s-18.5MiB/s (12.4MB/s-19.4MB/s), io=196MiB (205MB), run=2899-3715msec 00:22:48.650 00:22:48.650 Disk stats (read/write): 00:22:48.650 
nvme0n1: ios=10385/0, merge=0/0, ticks=3130/0, in_queue=3130, util=95.25% 00:22:48.650 nvme0n2: ios=17124/0, merge=0/0, ticks=3330/0, in_queue=3330, util=95.61% 00:22:48.650 nvme0n3: ios=13166/0, merge=0/0, ticks=2898/0, in_queue=2898, util=96.36% 00:22:48.650 nvme0n4: ios=8727/0, merge=0/0, ticks=2702/0, in_queue=2702, util=96.62% 00:22:48.650 13:33:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:48.650 13:33:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:48.907 13:33:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:48.907 13:33:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:49.165 13:33:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:49.165 13:33:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:49.733 13:33:06 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:49.733 13:33:06 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:49.998 13:33:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:49.998 13:33:07 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:50.256 13:33:07 -- target/fio.sh@69 -- # fio_status=0 00:22:50.256 13:33:07 -- target/fio.sh@70 -- # wait 76210 00:22:50.256 13:33:07 -- target/fio.sh@70 -- # fio_status=4 00:22:50.256 13:33:07 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:50.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:50.256 13:33:07 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:50.256 13:33:07 -- common/autotest_common.sh@1205 -- # local i=0 00:22:50.256 13:33:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:22:50.256 13:33:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:50.256 13:33:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:22:50.256 13:33:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:50.515 nvmf hotplug test: fio failed as expected 00:22:50.515 13:33:07 -- common/autotest_common.sh@1217 -- # return 0 00:22:50.515 13:33:07 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:50.515 13:33:07 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:50.515 13:33:07 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.774 13:33:08 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:50.774 13:33:08 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:50.774 13:33:08 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:50.774 13:33:08 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:50.774 13:33:08 -- target/fio.sh@91 -- # nvmftestfini 00:22:50.774 13:33:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:50.774 13:33:08 -- nvmf/common.sh@117 -- # sync 00:22:50.774 13:33:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.774 13:33:08 -- nvmf/common.sh@120 -- # set +e 00:22:50.774 13:33:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.774 13:33:08 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:22:50.774 rmmod nvme_tcp 00:22:50.774 rmmod nvme_fabrics 00:22:50.774 rmmod nvme_keyring 00:22:50.774 13:33:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.774 13:33:08 -- nvmf/common.sh@124 -- # set -e 00:22:50.774 13:33:08 -- nvmf/common.sh@125 -- # return 0 00:22:50.774 13:33:08 -- nvmf/common.sh@478 -- # '[' -n 75724 ']' 00:22:50.774 13:33:08 -- nvmf/common.sh@479 -- # killprocess 75724 00:22:50.774 13:33:08 -- common/autotest_common.sh@936 -- # '[' -z 75724 ']' 00:22:50.774 13:33:08 -- common/autotest_common.sh@940 -- # kill -0 75724 00:22:50.774 13:33:08 -- common/autotest_common.sh@941 -- # uname 00:22:50.774 13:33:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:50.774 13:33:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75724 00:22:50.774 killing process with pid 75724 00:22:50.774 13:33:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:50.774 13:33:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:50.774 13:33:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75724' 00:22:50.774 13:33:08 -- common/autotest_common.sh@955 -- # kill 75724 00:22:50.774 13:33:08 -- common/autotest_common.sh@960 -- # wait 75724 00:22:51.032 13:33:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:51.032 13:33:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:51.032 13:33:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:51.032 13:33:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.032 13:33:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.032 13:33:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.032 13:33:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.032 13:33:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.032 13:33:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:51.032 ************************************ 00:22:51.032 END TEST nvmf_fio_target 00:22:51.032 ************************************ 00:22:51.032 00:22:51.032 real 0m19.874s 00:22:51.032 user 1m16.296s 00:22:51.032 sys 0m8.480s 00:22:51.032 13:33:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.032 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:22:51.032 13:33:08 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:51.032 13:33:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:51.032 13:33:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.032 13:33:08 -- common/autotest_common.sh@10 -- # set +x 00:22:51.291 ************************************ 00:22:51.291 START TEST nvmf_bdevio 00:22:51.291 ************************************ 00:22:51.291 13:33:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:51.291 * Looking for test storage... 
00:22:51.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:51.292 13:33:08 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:51.292 13:33:08 -- nvmf/common.sh@7 -- # uname -s 00:22:51.292 13:33:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.292 13:33:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.292 13:33:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.292 13:33:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.292 13:33:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.292 13:33:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.292 13:33:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.292 13:33:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.292 13:33:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.292 13:33:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:51.292 13:33:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:51.292 13:33:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.292 13:33:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.292 13:33:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:51.292 13:33:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.292 13:33:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:51.292 13:33:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.292 13:33:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.292 13:33:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.292 13:33:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.292 13:33:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.292 13:33:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.292 13:33:08 -- paths/export.sh@5 -- # export PATH 00:22:51.292 13:33:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.292 13:33:08 -- nvmf/common.sh@47 -- # : 0 00:22:51.292 13:33:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.292 13:33:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.292 13:33:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.292 13:33:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.292 13:33:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.292 13:33:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.292 13:33:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.292 13:33:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.292 13:33:08 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:51.292 13:33:08 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:51.292 13:33:08 -- target/bdevio.sh@14 -- # nvmftestinit 00:22:51.292 13:33:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:51.292 13:33:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.292 13:33:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:51.292 13:33:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:51.292 13:33:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:51.292 13:33:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.292 13:33:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.292 13:33:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.292 13:33:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:51.292 13:33:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:51.292 13:33:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.292 13:33:08 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.292 13:33:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:51.292 13:33:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:51.292 13:33:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:51.292 13:33:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:51.292 13:33:08 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:51.292 13:33:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.292 13:33:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:51.292 13:33:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:51.292 13:33:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:51.292 13:33:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:51.292 13:33:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:51.292 13:33:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:51.292 Cannot find device "nvmf_tgt_br" 00:22:51.292 13:33:08 -- nvmf/common.sh@155 -- # true 00:22:51.292 13:33:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.292 Cannot find device "nvmf_tgt_br2" 00:22:51.292 13:33:08 -- nvmf/common.sh@156 -- # true 00:22:51.292 13:33:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:51.292 13:33:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:51.292 Cannot find device "nvmf_tgt_br" 00:22:51.292 13:33:08 -- nvmf/common.sh@158 -- # true 00:22:51.292 13:33:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:51.292 Cannot find device "nvmf_tgt_br2" 00:22:51.292 13:33:08 -- nvmf/common.sh@159 -- # true 00:22:51.292 13:33:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:51.551 13:33:08 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:51.551 13:33:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.551 13:33:08 -- nvmf/common.sh@162 -- # true 00:22:51.551 13:33:08 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.551 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.551 13:33:08 -- nvmf/common.sh@163 -- # true 00:22:51.551 13:33:08 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.551 13:33:08 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.551 13:33:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.551 13:33:08 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.551 13:33:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.551 13:33:08 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.551 13:33:08 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.551 13:33:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:51.551 13:33:08 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:51.551 13:33:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:51.551 13:33:08 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:51.551 13:33:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:51.551 13:33:08 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:51.551 13:33:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.551 13:33:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.551 13:33:08 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:22:51.551 13:33:08 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:51.551 13:33:08 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:51.551 13:33:08 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.551 13:33:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.551 13:33:08 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.551 13:33:08 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.551 13:33:08 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.551 13:33:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:51.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:22:51.551 00:22:51.551 --- 10.0.0.2 ping statistics --- 00:22:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.551 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:22:51.551 13:33:08 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:51.551 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:51.551 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:22:51.551 00:22:51.551 --- 10.0.0.3 ping statistics --- 00:22:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.551 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:51.551 13:33:08 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:51.551 00:22:51.551 --- 10.0.0.1 ping statistics --- 00:22:51.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.551 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:51.551 13:33:08 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.551 13:33:08 -- nvmf/common.sh@422 -- # return 0 00:22:51.551 13:33:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:51.551 13:33:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.551 13:33:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:51.551 13:33:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:51.551 13:33:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.551 13:33:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:51.551 13:33:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:51.810 13:33:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:51.810 13:33:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:51.810 13:33:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:51.810 13:33:09 -- common/autotest_common.sh@10 -- # set +x 00:22:51.810 13:33:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:51.810 13:33:09 -- nvmf/common.sh@470 -- # nvmfpid=76589 00:22:51.810 13:33:09 -- nvmf/common.sh@471 -- # waitforlisten 76589 00:22:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
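
The nvmf_veth_init trace above is the whole virtual topology these TCP tests run on: the target lives inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1 in the root namespace, and the host-side veth peers are bridged so the pings can confirm reachability before the target starts. A condensed sketch of the same setup, using only commands that appear in the trace (error handling and the ping checks omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  for l in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$l" up; done
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The "Cannot find device" and "Cannot open network namespace" messages earlier are expected: the script tears down any leftover topology before rebuilding it, and on a clean host there is nothing to delete.
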
00:22:51.810 13:33:09 -- common/autotest_common.sh@817 -- # '[' -z 76589 ']' 00:22:51.810 13:33:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.810 13:33:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:51.810 13:33:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.810 13:33:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:51.810 13:33:09 -- common/autotest_common.sh@10 -- # set +x 00:22:51.810 [2024-04-26 13:33:09.061543] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:51.811 [2024-04-26 13:33:09.061637] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.811 [2024-04-26 13:33:09.200545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.069 [2024-04-26 13:33:09.322875] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.069 [2024-04-26 13:33:09.322932] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.069 [2024-04-26 13:33:09.322944] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.069 [2024-04-26 13:33:09.322953] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.069 [2024-04-26 13:33:09.322961] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.069 [2024-04-26 13:33:09.323122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.069 [2024-04-26 13:33:09.323718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:52.069 [2024-04-26 13:33:09.323843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:52.069 [2024-04-26 13:33:09.323848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.021 13:33:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:53.021 13:33:10 -- common/autotest_common.sh@850 -- # return 0 00:22:53.021 13:33:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:53.021 13:33:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:53.021 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.021 13:33:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.021 13:33:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.021 13:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.021 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.021 [2024-04-26 13:33:10.146566] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.021 13:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.022 13:33:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.022 13:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.022 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.022 Malloc0 00:22:53.022 13:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.022 13:33:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.022 13:33:10 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.022 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.022 13:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.022 13:33:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.022 13:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.022 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.022 13:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.022 13:33:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.022 13:33:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.022 13:33:10 -- common/autotest_common.sh@10 -- # set +x 00:22:53.022 [2024-04-26 13:33:10.211192] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.022 13:33:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.022 13:33:10 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:53.022 13:33:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:53.022 13:33:10 -- nvmf/common.sh@521 -- # config=() 00:22:53.022 13:33:10 -- nvmf/common.sh@521 -- # local subsystem config 00:22:53.022 13:33:10 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:53.022 13:33:10 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:53.022 { 00:22:53.022 "params": { 00:22:53.022 "name": "Nvme$subsystem", 00:22:53.022 "trtype": "$TEST_TRANSPORT", 00:22:53.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.022 "adrfam": "ipv4", 00:22:53.022 "trsvcid": "$NVMF_PORT", 00:22:53.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.022 "hdgst": ${hdgst:-false}, 00:22:53.022 "ddgst": ${ddgst:-false} 00:22:53.022 }, 00:22:53.022 "method": "bdev_nvme_attach_controller" 00:22:53.022 } 00:22:53.022 EOF 00:22:53.022 )") 00:22:53.022 13:33:10 -- nvmf/common.sh@543 -- # cat 00:22:53.022 13:33:10 -- nvmf/common.sh@545 -- # jq . 00:22:53.022 13:33:10 -- nvmf/common.sh@546 -- # IFS=, 00:22:53.022 13:33:10 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:53.022 "params": { 00:22:53.022 "name": "Nvme1", 00:22:53.022 "trtype": "tcp", 00:22:53.022 "traddr": "10.0.0.2", 00:22:53.022 "adrfam": "ipv4", 00:22:53.022 "trsvcid": "4420", 00:22:53.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.022 "hdgst": false, 00:22:53.022 "ddgst": false 00:22:53.022 }, 00:22:53.022 "method": "bdev_nvme_attach_controller" 00:22:53.022 }' 00:22:53.022 [2024-04-26 13:33:10.263255] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
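
Everything before the bdevio launch is plain target provisioning through the JSON-RPC socket: one TCP transport, one 64 MiB malloc bdev, one subsystem with that bdev as namespace 1, and a listener on 10.0.0.2:4420. rpc_cmd is the harness helper that drives /var/tmp/spdk.sock; reproducing the same setup by hand with scripts/rpc.py would look roughly like this (a sketch, arguments copied from the traced calls):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
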
00:22:53.022 [2024-04-26 13:33:10.263341] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76643 ] 00:22:53.022 [2024-04-26 13:33:10.400432] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.281 [2024-04-26 13:33:10.513746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.281 [2024-04-26 13:33:10.513888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.281 [2024-04-26 13:33:10.513892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.281 I/O targets: 00:22:53.281 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:53.281 00:22:53.281 00:22:53.281 CUnit - A unit testing framework for C - Version 2.1-3 00:22:53.281 http://cunit.sourceforge.net/ 00:22:53.281 00:22:53.281 00:22:53.281 Suite: bdevio tests on: Nvme1n1 00:22:53.540 Test: blockdev write read block ...passed 00:22:53.540 Test: blockdev write zeroes read block ...passed 00:22:53.540 Test: blockdev write zeroes read no split ...passed 00:22:53.540 Test: blockdev write zeroes read split ...passed 00:22:53.540 Test: blockdev write zeroes read split partial ...passed 00:22:53.540 Test: blockdev reset ...[2024-04-26 13:33:10.803870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:53.540 [2024-04-26 13:33:10.804003] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17902f0 (9): Bad file descriptor 00:22:53.540 [2024-04-26 13:33:10.817661] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:53.540 passed 00:22:53.540 Test: blockdev write read 8 blocks ...passed 00:22:53.540 Test: blockdev write read size > 128k ...passed 00:22:53.540 Test: blockdev write read invalid size ...passed 00:22:53.540 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:53.540 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:53.540 Test: blockdev write read max offset ...passed 00:22:53.540 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:53.540 Test: blockdev writev readv 8 blocks ...passed 00:22:53.540 Test: blockdev writev readv 30 x 1block ...passed 00:22:53.540 Test: blockdev writev readv block ...passed 00:22:53.540 Test: blockdev writev readv size > 128k ...passed 00:22:53.540 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:53.798 Test: blockdev comparev and writev ...[2024-04-26 13:33:10.988615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.988682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.988703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.988715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.989265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.989290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.989308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.989319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.989710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.989741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.989759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.989770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.990346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.990378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:10.990396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:53.799 [2024-04-26 13:33:10.990406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:53.799 passed 00:22:53.799 Test: blockdev nvme passthru rw ...passed 00:22:53.799 Test: blockdev nvme passthru vendor specific ...[2024-04-26 13:33:11.073175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.799 [2024-04-26 13:33:11.073228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:11.073356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.799 [2024-04-26 13:33:11.073372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:11.073482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.799 [2024-04-26 13:33:11.073497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:53.799 [2024-04-26 13:33:11.073610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:53.799 [2024-04-26 13:33:11.073626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:53.799 passed 00:22:53.799 Test: blockdev nvme admin passthru ...passed 00:22:53.799 Test: blockdev copy ...passed 00:22:53.799 00:22:53.799 Run Summary: Type Total Ran Passed Failed Inactive 00:22:53.799 suites 1 1 n/a 0 0 00:22:53.799 tests 23 23 23 0 0 00:22:53.799 asserts 
152 152 152 0 n/a 00:22:53.799 00:22:53.799 Elapsed time = 0.897 seconds 00:22:54.059 13:33:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.059 13:33:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.059 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.059 13:33:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.059 13:33:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:54.059 13:33:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:22:54.059 13:33:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:54.059 13:33:11 -- nvmf/common.sh@117 -- # sync 00:22:54.059 13:33:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.059 13:33:11 -- nvmf/common.sh@120 -- # set +e 00:22:54.059 13:33:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.059 13:33:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.059 rmmod nvme_tcp 00:22:54.059 rmmod nvme_fabrics 00:22:54.059 rmmod nvme_keyring 00:22:54.059 13:33:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.059 13:33:11 -- nvmf/common.sh@124 -- # set -e 00:22:54.059 13:33:11 -- nvmf/common.sh@125 -- # return 0 00:22:54.059 13:33:11 -- nvmf/common.sh@478 -- # '[' -n 76589 ']' 00:22:54.059 13:33:11 -- nvmf/common.sh@479 -- # killprocess 76589 00:22:54.059 13:33:11 -- common/autotest_common.sh@936 -- # '[' -z 76589 ']' 00:22:54.059 13:33:11 -- common/autotest_common.sh@940 -- # kill -0 76589 00:22:54.059 13:33:11 -- common/autotest_common.sh@941 -- # uname 00:22:54.059 13:33:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.059 13:33:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76589 00:22:54.059 killing process with pid 76589 00:22:54.059 13:33:11 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:22:54.059 13:33:11 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:22:54.059 13:33:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76589' 00:22:54.059 13:33:11 -- common/autotest_common.sh@955 -- # kill 76589 00:22:54.059 13:33:11 -- common/autotest_common.sh@960 -- # wait 76589 00:22:54.627 13:33:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:54.627 13:33:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:54.627 13:33:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:54.627 13:33:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.627 13:33:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.627 13:33:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.627 13:33:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.627 13:33:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.627 13:33:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:54.627 00:22:54.627 real 0m3.302s 00:22:54.627 user 0m11.717s 00:22:54.627 sys 0m0.844s 00:22:54.627 13:33:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:54.627 ************************************ 00:22:54.627 END TEST nvmf_bdevio 00:22:54.627 ************************************ 00:22:54.627 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.627 13:33:11 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:22:54.627 13:33:11 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.627 13:33:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:54.627 
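
The nvmf_bdevio suite above and the nvmf_bdevio_no_huge suite that starts below run the identical bdevio flow (23 tests, 152 asserts); the only difference is how memory is backed. In the no-huge variant both the target and the bdevio app are launched with --no-huge and a 1024 MB cap via -s 1024, so DPDK falls back to anonymous memory instead of hugepages. Condensed from the traces (full binary paths and the --json plumbing unchanged):

  # hugepage-backed run (just finished)
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  bdevio --json /dev/fd/62

  # --no-huge run (starts below): same flow, anonymous memory, 1024 MB limit
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  bdevio --json /dev/fd/62 --no-huge -s 1024

Note that the EAL parameter lines also switch from --iova-mode=pa in the hugepage run to --iova-mode=va in the no-huge run.
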
13:33:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:54.627 13:33:11 -- common/autotest_common.sh@10 -- # set +x 00:22:54.627 ************************************ 00:22:54.627 START TEST nvmf_bdevio_no_huge 00:22:54.627 ************************************ 00:22:54.627 13:33:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:54.627 * Looking for test storage... 00:22:54.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:54.627 13:33:12 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.627 13:33:12 -- nvmf/common.sh@7 -- # uname -s 00:22:54.627 13:33:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.627 13:33:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.627 13:33:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.627 13:33:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.627 13:33:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.627 13:33:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.627 13:33:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.627 13:33:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.627 13:33:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.627 13:33:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:54.627 13:33:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:54.627 13:33:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.627 13:33:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.627 13:33:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.627 13:33:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.627 13:33:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.627 13:33:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.627 13:33:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.627 13:33:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.627 13:33:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.627 13:33:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.627 13:33:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.627 13:33:12 -- paths/export.sh@5 -- # export PATH 00:22:54.627 13:33:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.627 13:33:12 -- nvmf/common.sh@47 -- # : 0 00:22:54.627 13:33:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.627 13:33:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.627 13:33:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.627 13:33:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.627 13:33:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.627 13:33:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.627 13:33:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.627 13:33:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.627 13:33:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.627 13:33:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.627 13:33:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:22:54.627 13:33:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:54.627 13:33:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.627 13:33:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:54.627 13:33:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:54.627 13:33:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:54.627 13:33:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.627 13:33:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.627 13:33:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.627 13:33:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:54.627 13:33:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:54.627 13:33:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.627 13:33:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.627 13:33:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:54.627 13:33:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:54.627 13:33:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.627 13:33:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.627 13:33:12 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.627 13:33:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.627 13:33:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.627 13:33:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.627 13:33:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.627 13:33:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.627 13:33:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:54.627 13:33:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:54.627 Cannot find device "nvmf_tgt_br" 00:22:54.627 13:33:12 -- nvmf/common.sh@155 -- # true 00:22:54.627 13:33:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.886 Cannot find device "nvmf_tgt_br2" 00:22:54.886 13:33:12 -- nvmf/common.sh@156 -- # true 00:22:54.886 13:33:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:54.886 13:33:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:54.886 Cannot find device "nvmf_tgt_br" 00:22:54.886 13:33:12 -- nvmf/common.sh@158 -- # true 00:22:54.886 13:33:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:54.886 Cannot find device "nvmf_tgt_br2" 00:22:54.886 13:33:12 -- nvmf/common.sh@159 -- # true 00:22:54.886 13:33:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:54.886 13:33:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:54.886 13:33:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.886 13:33:12 -- nvmf/common.sh@162 -- # true 00:22:54.886 13:33:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.886 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.886 13:33:12 -- nvmf/common.sh@163 -- # true 00:22:54.886 13:33:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.886 13:33:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.886 13:33:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.886 13:33:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.886 13:33:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.886 13:33:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.886 13:33:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.886 13:33:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.886 13:33:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.886 13:33:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:54.886 13:33:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:54.886 13:33:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:54.886 13:33:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:54.886 13:33:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.886 13:33:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.886 13:33:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:22:54.886 13:33:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:54.886 13:33:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:54.886 13:33:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:55.145 13:33:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:55.145 13:33:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.145 13:33:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.145 13:33:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.145 13:33:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:55.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:22:55.145 00:22:55.145 --- 10.0.0.2 ping statistics --- 00:22:55.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.145 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:55.145 13:33:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:55.145 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.145 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:22:55.145 00:22:55.145 --- 10.0.0.3 ping statistics --- 00:22:55.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.145 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:55.146 13:33:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:55.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:22:55.146 00:22:55.146 --- 10.0.0.1 ping statistics --- 00:22:55.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.146 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:55.146 13:33:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.146 13:33:12 -- nvmf/common.sh@422 -- # return 0 00:22:55.146 13:33:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:55.146 13:33:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.146 13:33:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:55.146 13:33:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:55.146 13:33:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.146 13:33:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:55.146 13:33:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:55.146 13:33:12 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:55.146 13:33:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:55.146 13:33:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:55.146 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:22:55.146 13:33:12 -- nvmf/common.sh@470 -- # nvmfpid=76829 00:22:55.146 13:33:12 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:55.146 13:33:12 -- nvmf/common.sh@471 -- # waitforlisten 76829 00:22:55.146 13:33:12 -- common/autotest_common.sh@817 -- # '[' -z 76829 ']' 00:22:55.146 13:33:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.146 13:33:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:55.146 13:33:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:55.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.146 13:33:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:55.146 13:33:12 -- common/autotest_common.sh@10 -- # set +x 00:22:55.146 [2024-04-26 13:33:12.486474] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:55.146 [2024-04-26 13:33:12.486589] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:55.405 [2024-04-26 13:33:12.643676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.405 [2024-04-26 13:33:12.770178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.405 [2024-04-26 13:33:12.770266] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.405 [2024-04-26 13:33:12.770297] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.405 [2024-04-26 13:33:12.770305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.405 [2024-04-26 13:33:12.770314] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.405 [2024-04-26 13:33:12.770504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:55.405 [2024-04-26 13:33:12.770977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:55.405 [2024-04-26 13:33:12.771145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.405 [2024-04-26 13:33:12.771145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:56.346 13:33:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:56.346 13:33:13 -- common/autotest_common.sh@850 -- # return 0 00:22:56.346 13:33:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:56.346 13:33:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 13:33:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.346 13:33:13 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.346 13:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 [2024-04-26 13:33:13.491754] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.346 13:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.346 13:33:13 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:56.346 13:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 Malloc0 00:22:56.346 13:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.346 13:33:13 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:56.346 13:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 13:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.346 13:33:13 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:56.346 13:33:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 13:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.346 13:33:13 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.346 13:33:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.346 13:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:56.346 [2024-04-26 13:33:13.532024] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.346 13:33:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.346 13:33:13 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:56.346 13:33:13 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:56.346 13:33:13 -- nvmf/common.sh@521 -- # config=() 00:22:56.346 13:33:13 -- nvmf/common.sh@521 -- # local subsystem config 00:22:56.346 13:33:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:56.346 13:33:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:56.346 { 00:22:56.346 "params": { 00:22:56.346 "name": "Nvme$subsystem", 00:22:56.346 "trtype": "$TEST_TRANSPORT", 00:22:56.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:56.346 "adrfam": "ipv4", 00:22:56.346 "trsvcid": "$NVMF_PORT", 00:22:56.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:56.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:56.346 "hdgst": ${hdgst:-false}, 00:22:56.346 "ddgst": ${ddgst:-false} 00:22:56.346 }, 00:22:56.346 "method": "bdev_nvme_attach_controller" 00:22:56.346 } 00:22:56.346 EOF 00:22:56.346 )") 00:22:56.346 13:33:13 -- nvmf/common.sh@543 -- # cat 00:22:56.346 13:33:13 -- nvmf/common.sh@545 -- # jq . 00:22:56.346 13:33:13 -- nvmf/common.sh@546 -- # IFS=, 00:22:56.346 13:33:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:56.346 "params": { 00:22:56.346 "name": "Nvme1", 00:22:56.346 "trtype": "tcp", 00:22:56.346 "traddr": "10.0.0.2", 00:22:56.346 "adrfam": "ipv4", 00:22:56.346 "trsvcid": "4420", 00:22:56.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.346 "hdgst": false, 00:22:56.346 "ddgst": false 00:22:56.346 }, 00:22:56.346 "method": "bdev_nvme_attach_controller" 00:22:56.346 }' 00:22:56.346 [2024-04-26 13:33:13.587869] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
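
As in the first suite, gen_nvmf_target_json feeds bdevio its bdev configuration through --json /dev/fd/62, so the initiator attaches the remote namespace as Nvme1n1 before the tests start. The params block is printed verbatim in the trace above; wrapped in the standard SPDK JSON-config layout (the outer subsystems/config structure is assumed here, only the params appear in the log) the file bdevio reads looks roughly like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }
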
00:22:56.346 [2024-04-26 13:33:13.587959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76890 ] 00:22:56.346 [2024-04-26 13:33:13.726083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:56.605 [2024-04-26 13:33:13.866769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.605 [2024-04-26 13:33:13.866926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.605 [2024-04-26 13:33:13.866935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.862 I/O targets: 00:22:56.862 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:56.862 00:22:56.862 00:22:56.862 CUnit - A unit testing framework for C - Version 2.1-3 00:22:56.862 http://cunit.sourceforge.net/ 00:22:56.862 00:22:56.862 00:22:56.862 Suite: bdevio tests on: Nvme1n1 00:22:56.862 Test: blockdev write read block ...passed 00:22:56.862 Test: blockdev write zeroes read block ...passed 00:22:56.862 Test: blockdev write zeroes read no split ...passed 00:22:56.862 Test: blockdev write zeroes read split ...passed 00:22:56.862 Test: blockdev write zeroes read split partial ...passed 00:22:56.862 Test: blockdev reset ...[2024-04-26 13:33:14.213193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.862 [2024-04-26 13:33:14.213329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a3990 (9): Bad file descriptor 00:22:56.862 [2024-04-26 13:33:14.224648] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:56.862 passed 00:22:56.862 Test: blockdev write read 8 blocks ...passed 00:22:56.862 Test: blockdev write read size > 128k ...passed 00:22:56.862 Test: blockdev write read invalid size ...passed 00:22:56.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:56.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:56.863 Test: blockdev write read max offset ...passed 00:22:57.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:57.121 Test: blockdev writev readv 8 blocks ...passed 00:22:57.121 Test: blockdev writev readv 30 x 1block ...passed 00:22:57.121 Test: blockdev writev readv block ...passed 00:22:57.121 Test: blockdev writev readv size > 128k ...passed 00:22:57.121 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:57.121 Test: blockdev comparev and writev ...[2024-04-26 13:33:14.399655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.399717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.399743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.399759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.400134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.400170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.400193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.400206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.400576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.400609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.400638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.400651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.401072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.401105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.401128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:57.121 [2024-04-26 13:33:14.401141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:57.121 passed 00:22:57.121 Test: blockdev nvme passthru rw ...passed 00:22:57.121 Test: blockdev nvme passthru vendor specific ...[2024-04-26 13:33:14.484115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.121 [2024-04-26 13:33:14.484183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.484318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.121 [2024-04-26 13:33:14.484336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.484451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.121 [2024-04-26 13:33:14.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:57.121 [2024-04-26 13:33:14.484588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:57.121 [2024-04-26 13:33:14.484609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:57.121 passed 00:22:57.121 Test: blockdev nvme admin passthru ...passed 00:22:57.121 Test: blockdev copy ...passed 00:22:57.121 00:22:57.121 Run Summary: Type Total Ran Passed Failed Inactive 00:22:57.121 suites 1 1 n/a 0 0 00:22:57.121 tests 23 23 23 0 0 00:22:57.121 asserts 152 152 152 0 
n/a 00:22:57.121 00:22:57.121 Elapsed time = 0.936 seconds 00:22:57.688 13:33:15 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.688 13:33:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.688 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:22:57.688 13:33:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.688 13:33:15 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:57.688 13:33:15 -- target/bdevio.sh@30 -- # nvmftestfini 00:22:57.688 13:33:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:57.688 13:33:15 -- nvmf/common.sh@117 -- # sync 00:22:57.688 13:33:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.688 13:33:15 -- nvmf/common.sh@120 -- # set +e 00:22:57.688 13:33:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.688 13:33:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.688 rmmod nvme_tcp 00:22:57.688 rmmod nvme_fabrics 00:22:57.688 rmmod nvme_keyring 00:22:57.945 13:33:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.945 13:33:15 -- nvmf/common.sh@124 -- # set -e 00:22:57.945 13:33:15 -- nvmf/common.sh@125 -- # return 0 00:22:57.945 13:33:15 -- nvmf/common.sh@478 -- # '[' -n 76829 ']' 00:22:57.945 13:33:15 -- nvmf/common.sh@479 -- # killprocess 76829 00:22:57.945 13:33:15 -- common/autotest_common.sh@936 -- # '[' -z 76829 ']' 00:22:57.945 13:33:15 -- common/autotest_common.sh@940 -- # kill -0 76829 00:22:57.945 13:33:15 -- common/autotest_common.sh@941 -- # uname 00:22:57.945 13:33:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:57.945 13:33:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76829 00:22:57.945 13:33:15 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:22:57.945 13:33:15 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:22:57.945 killing process with pid 76829 00:22:57.945 13:33:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76829' 00:22:57.945 13:33:15 -- common/autotest_common.sh@955 -- # kill 76829 00:22:57.945 13:33:15 -- common/autotest_common.sh@960 -- # wait 76829 00:22:58.512 13:33:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:58.512 13:33:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:58.512 13:33:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:58.512 13:33:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:58.512 13:33:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:58.512 13:33:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.513 13:33:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.513 13:33:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.513 13:33:15 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:58.513 00:22:58.513 real 0m3.808s 00:22:58.513 user 0m13.635s 00:22:58.513 sys 0m1.426s 00:22:58.513 13:33:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.513 13:33:15 -- common/autotest_common.sh@10 -- # set +x 00:22:58.513 ************************************ 00:22:58.513 END TEST nvmf_bdevio_no_huge 00:22:58.513 ************************************ 00:22:58.513 13:33:15 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:58.513 13:33:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.513 13:33:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.513 13:33:15 -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.513 ************************************ 00:22:58.513 START TEST nvmf_tls 00:22:58.513 ************************************ 00:22:58.513 13:33:15 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:58.513 * Looking for test storage... 00:22:58.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:58.513 13:33:15 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.513 13:33:15 -- nvmf/common.sh@7 -- # uname -s 00:22:58.513 13:33:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.513 13:33:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.513 13:33:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.513 13:33:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.513 13:33:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.513 13:33:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.513 13:33:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.513 13:33:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.513 13:33:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.513 13:33:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:58.513 13:33:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:22:58.513 13:33:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.513 13:33:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.513 13:33:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.513 13:33:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.513 13:33:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.513 13:33:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.513 13:33:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.513 13:33:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.513 13:33:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.513 13:33:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.513 13:33:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.513 13:33:15 -- paths/export.sh@5 -- # export PATH 00:22:58.513 13:33:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.513 13:33:15 -- nvmf/common.sh@47 -- # : 0 00:22:58.513 13:33:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.513 13:33:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.513 13:33:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.513 13:33:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.513 13:33:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.513 13:33:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.513 13:33:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.513 13:33:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.513 13:33:15 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.513 13:33:15 -- target/tls.sh@62 -- # nvmftestinit 00:22:58.513 13:33:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:58.513 13:33:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.513 13:33:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:58.513 13:33:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:58.513 13:33:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:58.513 13:33:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.513 13:33:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.513 13:33:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.513 13:33:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:58.513 13:33:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:58.513 13:33:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.513 13:33:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.513 13:33:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:58.513 13:33:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:58.513 13:33:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.513 13:33:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.513 13:33:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.513 
13:33:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.513 13:33:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.513 13:33:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.513 13:33:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.513 13:33:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.513 13:33:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:58.772 13:33:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:58.772 Cannot find device "nvmf_tgt_br" 00:22:58.772 13:33:15 -- nvmf/common.sh@155 -- # true 00:22:58.772 13:33:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.772 Cannot find device "nvmf_tgt_br2" 00:22:58.772 13:33:16 -- nvmf/common.sh@156 -- # true 00:22:58.772 13:33:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:58.772 13:33:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:58.772 Cannot find device "nvmf_tgt_br" 00:22:58.772 13:33:16 -- nvmf/common.sh@158 -- # true 00:22:58.772 13:33:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:58.772 Cannot find device "nvmf_tgt_br2" 00:22:58.772 13:33:16 -- nvmf/common.sh@159 -- # true 00:22:58.772 13:33:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:58.772 13:33:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:58.772 13:33:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.772 13:33:16 -- nvmf/common.sh@162 -- # true 00:22:58.772 13:33:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:58.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.772 13:33:16 -- nvmf/common.sh@163 -- # true 00:22:58.772 13:33:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.772 13:33:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.772 13:33:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.772 13:33:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.772 13:33:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.772 13:33:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.772 13:33:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.772 13:33:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.772 13:33:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:58.772 13:33:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:58.772 13:33:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:58.772 13:33:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:58.772 13:33:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:58.772 13:33:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:59.031 13:33:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:59.031 13:33:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:59.031 13:33:16 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:59.031 13:33:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:59.031 13:33:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:59.031 13:33:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:59.031 13:33:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:59.031 13:33:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:59.031 13:33:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:59.031 13:33:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:59.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:22:59.031 00:22:59.031 --- 10.0.0.2 ping statistics --- 00:22:59.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.031 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:22:59.031 13:33:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:59.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:59.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:22:59.031 00:22:59.031 --- 10.0.0.3 ping statistics --- 00:22:59.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.031 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:59.031 13:33:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:59.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:22:59.031 00:22:59.031 --- 10.0.0.1 ping statistics --- 00:22:59.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.031 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:59.031 13:33:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.031 13:33:16 -- nvmf/common.sh@422 -- # return 0 00:22:59.031 13:33:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:59.031 13:33:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.031 13:33:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:59.031 13:33:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:59.031 13:33:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.031 13:33:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:59.031 13:33:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:59.031 13:33:16 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:59.031 13:33:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:59.031 13:33:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:59.031 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:59.031 13:33:16 -- nvmf/common.sh@470 -- # nvmfpid=77083 00:22:59.031 13:33:16 -- nvmf/common.sh@471 -- # waitforlisten 77083 00:22:59.031 13:33:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:59.031 13:33:16 -- common/autotest_common.sh@817 -- # '[' -z 77083 ']' 00:22:59.031 13:33:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:59.032 13:33:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:59.032 13:33:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.032 13:33:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:59.032 13:33:16 -- common/autotest_common.sh@10 -- # set +x 00:22:59.032 [2024-04-26 13:33:16.408562] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:22:59.032 [2024-04-26 13:33:16.408968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.290 [2024-04-26 13:33:16.554898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.291 [2024-04-26 13:33:16.690366] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.291 [2024-04-26 13:33:16.690437] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.291 [2024-04-26 13:33:16.690453] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.291 [2024-04-26 13:33:16.690464] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.291 [2024-04-26 13:33:16.690474] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.291 [2024-04-26 13:33:16.690519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.224 13:33:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:00.224 13:33:17 -- common/autotest_common.sh@850 -- # return 0 00:23:00.224 13:33:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:00.224 13:33:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:00.224 13:33:17 -- common/autotest_common.sh@10 -- # set +x 00:23:00.224 13:33:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.224 13:33:17 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:23:00.224 13:33:17 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:00.483 true 00:23:00.483 13:33:17 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.483 13:33:17 -- target/tls.sh@73 -- # jq -r .tls_version 00:23:00.741 13:33:18 -- target/tls.sh@73 -- # version=0 00:23:00.741 13:33:18 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:23:00.741 13:33:18 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:00.999 13:33:18 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:00.999 13:33:18 -- target/tls.sh@81 -- # jq -r .tls_version 00:23:01.258 13:33:18 -- target/tls.sh@81 -- # version=13 00:23:01.258 13:33:18 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:23:01.258 13:33:18 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:01.517 13:33:18 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:01.517 13:33:18 -- target/tls.sh@89 -- # jq -r .tls_version 00:23:01.775 13:33:19 -- target/tls.sh@89 -- # version=7 00:23:01.775 13:33:19 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:23:01.775 13:33:19 
-- target/tls.sh@96 -- # jq -r .enable_ktls 00:23:01.775 13:33:19 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.034 13:33:19 -- target/tls.sh@96 -- # ktls=false 00:23:02.034 13:33:19 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:23:02.034 13:33:19 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:02.292 13:33:19 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:23:02.292 13:33:19 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.550 13:33:19 -- target/tls.sh@104 -- # ktls=true 00:23:02.550 13:33:19 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:23:02.550 13:33:19 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:02.808 13:33:20 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:02.808 13:33:20 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:23:03.067 13:33:20 -- target/tls.sh@112 -- # ktls=false 00:23:03.067 13:33:20 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:23:03.067 13:33:20 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:03.067 13:33:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:03.067 13:33:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # digest=1 00:23:03.067 13:33:20 -- nvmf/common.sh@694 -- # python - 00:23:03.067 13:33:20 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.067 13:33:20 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:03.067 13:33:20 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:03.067 13:33:20 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:23:03.067 13:33:20 -- nvmf/common.sh@693 -- # digest=1 00:23:03.067 13:33:20 -- nvmf/common.sh@694 -- # python - 00:23:03.359 13:33:20 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.359 13:33:20 -- target/tls.sh@121 -- # mktemp 00:23:03.359 13:33:20 -- target/tls.sh@121 -- # key_path=/tmp/tmp.GIq8NQC27v 00:23:03.359 13:33:20 -- target/tls.sh@122 -- # mktemp 00:23:03.359 13:33:20 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.QgdfYHMiHH 00:23:03.359 13:33:20 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:03.359 13:33:20 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:03.359 13:33:20 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.GIq8NQC27v 00:23:03.359 13:33:20 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.QgdfYHMiHH 00:23:03.359 13:33:20 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:03.623 13:33:20 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:23:03.882 13:33:21 -- target/tls.sh@133 -- # setup_nvmf_tgt 
/tmp/tmp.GIq8NQC27v 00:23:03.882 13:33:21 -- target/tls.sh@49 -- # local key=/tmp/tmp.GIq8NQC27v 00:23:03.882 13:33:21 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.147 [2024-04-26 13:33:21.565459] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.147 13:33:21 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:04.410 13:33:21 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:04.669 [2024-04-26 13:33:22.049564] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:04.669 [2024-04-26 13:33:22.049837] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.669 13:33:22 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:04.928 malloc0 00:23:05.188 13:33:22 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:05.446 13:33:22 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GIq8NQC27v 00:23:05.446 [2024-04-26 13:33:22.877510] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:05.710 13:33:22 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GIq8NQC27v 00:23:15.705 Initializing NVMe Controllers 00:23:15.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:15.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:15.705 Initialization complete. Launching workers. 
00:23:15.705 ======================================================== 00:23:15.705 Latency(us) 00:23:15.705 Device Information : IOPS MiB/s Average min max 00:23:15.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9133.31 35.68 7008.93 1543.52 8503.05 00:23:15.705 ======================================================== 00:23:15.705 Total : 9133.31 35.68 7008.93 1543.52 8503.05 00:23:15.705 00:23:15.705 13:33:33 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GIq8NQC27v 00:23:15.705 13:33:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.705 13:33:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.705 13:33:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.705 13:33:33 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GIq8NQC27v' 00:23:15.705 13:33:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.705 13:33:33 -- target/tls.sh@28 -- # bdevperf_pid=77454 00:23:15.705 13:33:33 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.705 13:33:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.705 13:33:33 -- target/tls.sh@31 -- # waitforlisten 77454 /var/tmp/bdevperf.sock 00:23:15.705 13:33:33 -- common/autotest_common.sh@817 -- # '[' -z 77454 ']' 00:23:15.705 13:33:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.705 13:33:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:15.705 13:33:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.705 13:33:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:15.705 13:33:33 -- common/autotest_common.sh@10 -- # set +x 00:23:15.964 [2024-04-26 13:33:33.161879] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:23:15.964 [2024-04-26 13:33:33.161994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77454 ] 00:23:15.964 [2024-04-26 13:33:33.300927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.222 [2024-04-26 13:33:33.413888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.786 13:33:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:16.787 13:33:34 -- common/autotest_common.sh@850 -- # return 0 00:23:16.787 13:33:34 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GIq8NQC27v 00:23:17.046 [2024-04-26 13:33:34.322443] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.046 [2024-04-26 13:33:34.322564] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:17.046 TLSTESTn1 00:23:17.046 13:33:34 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:17.305 Running I/O for 10 seconds... 00:23:27.279 00:23:27.279 Latency(us) 00:23:27.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.279 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:27.279 Verification LBA range: start 0x0 length 0x2000 00:23:27.279 TLSTESTn1 : 10.03 3661.41 14.30 0.00 0.00 34886.99 7119.59 24188.74 00:23:27.279 =================================================================================================================== 00:23:27.279 Total : 3661.41 14.30 0.00 0.00 34886.99 7119.59 24188.74 00:23:27.279 0 00:23:27.279 13:33:44 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:27.279 13:33:44 -- target/tls.sh@45 -- # killprocess 77454 00:23:27.279 13:33:44 -- common/autotest_common.sh@936 -- # '[' -z 77454 ']' 00:23:27.279 13:33:44 -- common/autotest_common.sh@940 -- # kill -0 77454 00:23:27.279 13:33:44 -- common/autotest_common.sh@941 -- # uname 00:23:27.279 13:33:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.279 13:33:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77454 00:23:27.279 13:33:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:27.279 13:33:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:27.279 killing process with pid 77454 00:23:27.279 13:33:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77454' 00:23:27.279 13:33:44 -- common/autotest_common.sh@955 -- # kill 77454 00:23:27.279 Received shutdown signal, test time was about 10.000000 seconds 00:23:27.279 00:23:27.279 Latency(us) 00:23:27.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.279 =================================================================================================================== 00:23:27.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.279 [2024-04-26 13:33:44.594208] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:27.279 
13:33:44 -- common/autotest_common.sh@960 -- # wait 77454 00:23:27.537 13:33:44 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QgdfYHMiHH 00:23:27.537 13:33:44 -- common/autotest_common.sh@638 -- # local es=0 00:23:27.537 13:33:44 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QgdfYHMiHH 00:23:27.537 13:33:44 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:27.538 13:33:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:27.538 13:33:44 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:27.538 13:33:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:27.538 13:33:44 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QgdfYHMiHH 00:23:27.538 13:33:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:27.538 13:33:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:27.538 13:33:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:27.538 13:33:44 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.QgdfYHMiHH' 00:23:27.538 13:33:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.538 13:33:44 -- target/tls.sh@28 -- # bdevperf_pid=77600 00:23:27.538 13:33:44 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:27.538 13:33:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:27.538 13:33:44 -- target/tls.sh@31 -- # waitforlisten 77600 /var/tmp/bdevperf.sock 00:23:27.538 13:33:44 -- common/autotest_common.sh@817 -- # '[' -z 77600 ']' 00:23:27.538 13:33:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.538 13:33:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:27.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.538 13:33:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.538 13:33:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:27.538 13:33:44 -- common/autotest_common.sh@10 -- # set +x 00:23:27.538 [2024-04-26 13:33:44.906738] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:23:27.538 [2024-04-26 13:33:44.906877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77600 ] 00:23:27.815 [2024-04-26 13:33:45.045901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.815 [2024-04-26 13:33:45.157309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.751 13:33:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:28.751 13:33:45 -- common/autotest_common.sh@850 -- # return 0 00:23:28.751 13:33:45 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QgdfYHMiHH 00:23:28.751 [2024-04-26 13:33:46.128272] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.751 [2024-04-26 13:33:46.128383] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.751 [2024-04-26 13:33:46.139950] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:28.751 [2024-04-26 13:33:46.140080] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15419c0 (107): Transport endpoint is not connected 00:23:28.751 [2024-04-26 13:33:46.141070] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15419c0 (9): Bad file descriptor 00:23:28.751 [2024-04-26 13:33:46.142068] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.751 [2024-04-26 13:33:46.142091] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:28.751 [2024-04-26 13:33:46.142105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:28.751 2024/04/26 13:33:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.QgdfYHMiHH subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:28.751 request: 00:23:28.751 { 00:23:28.751 "method": "bdev_nvme_attach_controller", 00:23:28.751 "params": { 00:23:28.751 "name": "TLSTEST", 00:23:28.751 "trtype": "tcp", 00:23:28.751 "traddr": "10.0.0.2", 00:23:28.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.751 "adrfam": "ipv4", 00:23:28.751 "trsvcid": "4420", 00:23:28.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.751 "psk": "/tmp/tmp.QgdfYHMiHH" 00:23:28.751 } 00:23:28.751 } 00:23:28.751 Got JSON-RPC error response 00:23:28.751 GoRPCClient: error on JSON-RPC call 00:23:28.751 13:33:46 -- target/tls.sh@36 -- # killprocess 77600 00:23:28.751 13:33:46 -- common/autotest_common.sh@936 -- # '[' -z 77600 ']' 00:23:28.751 13:33:46 -- common/autotest_common.sh@940 -- # kill -0 77600 00:23:28.751 13:33:46 -- common/autotest_common.sh@941 -- # uname 00:23:28.751 13:33:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:28.751 13:33:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77600 00:23:28.751 13:33:46 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:28.751 13:33:46 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:28.751 13:33:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77600' 00:23:28.751 killing process with pid 77600 00:23:28.751 13:33:46 -- common/autotest_common.sh@955 -- # kill 77600 00:23:28.751 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.751 00:23:28.751 Latency(us) 00:23:28.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.751 =================================================================================================================== 00:23:28.751 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.751 [2024-04-26 13:33:46.199058] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:28.751 13:33:46 -- common/autotest_common.sh@960 -- # wait 77600 00:23:29.011 13:33:46 -- target/tls.sh@37 -- # return 1 00:23:29.011 13:33:46 -- common/autotest_common.sh@641 -- # es=1 00:23:29.011 13:33:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:29.011 13:33:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:29.011 13:33:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:29.011 13:33:46 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GIq8NQC27v 00:23:29.011 13:33:46 -- common/autotest_common.sh@638 -- # local es=0 00:23:29.011 13:33:46 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GIq8NQC27v 00:23:29.011 13:33:46 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:29.011 13:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:29.011 13:33:46 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:29.011 13:33:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:29.011 13:33:46 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GIq8NQC27v 00:23:29.011 13:33:46 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:29.011 13:33:46 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:29.011 13:33:46 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:29.011 13:33:46 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GIq8NQC27v' 00:23:29.011 13:33:46 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.011 13:33:46 -- target/tls.sh@28 -- # bdevperf_pid=77646 00:23:29.011 13:33:46 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:29.011 13:33:46 -- target/tls.sh@31 -- # waitforlisten 77646 /var/tmp/bdevperf.sock 00:23:29.011 13:33:46 -- common/autotest_common.sh@817 -- # '[' -z 77646 ']' 00:23:29.011 13:33:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.011 13:33:46 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.011 13:33:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.011 13:33:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.011 13:33:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.011 13:33:46 -- common/autotest_common.sh@10 -- # set +x 00:23:29.269 [2024-04-26 13:33:46.509770] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:29.269 [2024-04-26 13:33:46.509931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77646 ] 00:23:29.269 [2024-04-26 13:33:46.646927] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.528 [2024-04-26 13:33:46.758416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.095 13:33:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.095 13:33:47 -- common/autotest_common.sh@850 -- # return 0 00:23:30.095 13:33:47 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.GIq8NQC27v 00:23:30.355 [2024-04-26 13:33:47.713250] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.355 [2024-04-26 13:33:47.713375] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:30.355 [2024-04-26 13:33:47.720017] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:30.355 [2024-04-26 13:33:47.720063] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:30.355 [2024-04-26 13:33:47.720126] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:30.355 [2024-04-26 13:33:47.720150] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d9c0 (107): Transport endpoint is not connected 00:23:30.355 [2024-04-26 13:33:47.721141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x183d9c0 (9): Bad file descriptor 00:23:30.355 [2024-04-26 13:33:47.722137] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:30.355 [2024-04-26 13:33:47.722162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:30.355 [2024-04-26 13:33:47.722177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.355 2024/04/26 13:33:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.GIq8NQC27v subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:30.355 request: 00:23:30.355 { 00:23:30.355 "method": "bdev_nvme_attach_controller", 00:23:30.355 "params": { 00:23:30.355 "name": "TLSTEST", 00:23:30.355 "trtype": "tcp", 00:23:30.355 "traddr": "10.0.0.2", 00:23:30.355 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:30.355 "adrfam": "ipv4", 00:23:30.355 "trsvcid": "4420", 00:23:30.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.355 "psk": "/tmp/tmp.GIq8NQC27v" 00:23:30.355 } 00:23:30.355 } 00:23:30.355 Got JSON-RPC error response 00:23:30.355 GoRPCClient: error on JSON-RPC call 00:23:30.355 13:33:47 -- target/tls.sh@36 -- # killprocess 77646 00:23:30.355 13:33:47 -- common/autotest_common.sh@936 -- # '[' -z 77646 ']' 00:23:30.355 13:33:47 -- common/autotest_common.sh@940 -- # kill -0 77646 00:23:30.355 13:33:47 -- common/autotest_common.sh@941 -- # uname 00:23:30.355 13:33:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.355 13:33:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77646 00:23:30.355 13:33:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:30.355 13:33:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:30.355 killing process with pid 77646 00:23:30.355 13:33:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77646' 00:23:30.355 13:33:47 -- common/autotest_common.sh@955 -- # kill 77646 00:23:30.355 Received shutdown signal, test time was about 10.000000 seconds 00:23:30.355 00:23:30.355 Latency(us) 00:23:30.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.355 =================================================================================================================== 00:23:30.355 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:30.355 [2024-04-26 13:33:47.775608] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:30.355 13:33:47 -- common/autotest_common.sh@960 -- # wait 77646 00:23:30.614 13:33:48 -- target/tls.sh@37 -- # return 1 00:23:30.614 13:33:48 -- common/autotest_common.sh@641 -- # es=1 00:23:30.614 13:33:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:30.614 13:33:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:30.614 13:33:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:30.614 13:33:48 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.GIq8NQC27v 00:23:30.614 13:33:48 -- common/autotest_common.sh@638 -- # local es=0 00:23:30.614 13:33:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GIq8NQC27v 00:23:30.614 13:33:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:30.614 13:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.614 13:33:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:30.614 13:33:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:30.614 13:33:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GIq8NQC27v 00:23:30.614 13:33:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:30.614 13:33:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:30.614 13:33:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:30.614 13:33:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GIq8NQC27v' 00:23:30.614 13:33:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:30.614 13:33:48 -- target/tls.sh@28 -- # bdevperf_pid=77691 00:23:30.614 13:33:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:30.614 13:33:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.614 13:33:48 -- target/tls.sh@31 -- # waitforlisten 77691 /var/tmp/bdevperf.sock 00:23:30.614 13:33:48 -- common/autotest_common.sh@817 -- # '[' -z 77691 ']' 00:23:30.614 13:33:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.614 13:33:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:30.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.614 13:33:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.614 13:33:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:30.614 13:33:48 -- common/autotest_common.sh@10 -- # set +x 00:23:30.873 [2024-04-26 13:33:48.086564] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:23:30.873 [2024-04-26 13:33:48.086690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77691 ] 00:23:30.873 [2024-04-26 13:33:48.221322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.131 [2024-04-26 13:33:48.341901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.699 13:33:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:31.699 13:33:49 -- common/autotest_common.sh@850 -- # return 0 00:23:31.699 13:33:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GIq8NQC27v 00:23:31.959 [2024-04-26 13:33:49.312991] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:31.959 [2024-04-26 13:33:49.313126] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:31.959 [2024-04-26 13:33:49.320238] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.959 [2024-04-26 13:33:49.320294] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:31.959 [2024-04-26 13:33:49.320365] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:31.959 [2024-04-26 13:33:49.321067] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb599c0 (107): Transport endpoint is not connected 00:23:31.959 [2024-04-26 13:33:49.322057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb599c0 (9): Bad file descriptor 00:23:31.959 [2024-04-26 13:33:49.323053] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:31.959 [2024-04-26 13:33:49.323079] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:31.959 [2024-04-26 13:33:49.323094] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:31.959 2024/04/26 13:33:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.GIq8NQC27v subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:31.959 request: 00:23:31.959 { 00:23:31.959 "method": "bdev_nvme_attach_controller", 00:23:31.959 "params": { 00:23:31.959 "name": "TLSTEST", 00:23:31.959 "trtype": "tcp", 00:23:31.959 "traddr": "10.0.0.2", 00:23:31.959 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.959 "adrfam": "ipv4", 00:23:31.959 "trsvcid": "4420", 00:23:31.959 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.959 "psk": "/tmp/tmp.GIq8NQC27v" 00:23:31.959 } 00:23:31.959 } 00:23:31.959 Got JSON-RPC error response 00:23:31.959 GoRPCClient: error on JSON-RPC call 00:23:31.959 13:33:49 -- target/tls.sh@36 -- # killprocess 77691 00:23:31.959 13:33:49 -- common/autotest_common.sh@936 -- # '[' -z 77691 ']' 00:23:31.959 13:33:49 -- common/autotest_common.sh@940 -- # kill -0 77691 00:23:31.959 13:33:49 -- common/autotest_common.sh@941 -- # uname 00:23:31.959 13:33:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.959 13:33:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77691 00:23:31.959 13:33:49 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:31.959 killing process with pid 77691 00:23:31.959 13:33:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:31.959 13:33:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77691' 00:23:31.959 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.959 00:23:31.959 Latency(us) 00:23:31.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.959 =================================================================================================================== 00:23:31.959 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:31.959 13:33:49 -- common/autotest_common.sh@955 -- # kill 77691 00:23:31.959 [2024-04-26 13:33:49.367947] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:31.959 13:33:49 -- common/autotest_common.sh@960 -- # wait 77691 00:23:32.219 13:33:49 -- target/tls.sh@37 -- # return 1 00:23:32.219 13:33:49 -- common/autotest_common.sh@641 -- # es=1 00:23:32.219 13:33:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:32.219 13:33:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:32.219 13:33:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:32.220 13:33:49 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:32.220 13:33:49 -- common/autotest_common.sh@638 -- # local es=0 00:23:32.220 13:33:49 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:32.220 13:33:49 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:32.220 13:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:32.220 13:33:49 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:32.220 13:33:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:32.220 13:33:49 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:23:32.220 13:33:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:32.220 13:33:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:32.220 13:33:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:32.220 13:33:49 -- target/tls.sh@23 -- # psk= 00:23:32.220 13:33:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:32.220 13:33:49 -- target/tls.sh@28 -- # bdevperf_pid=77737 00:23:32.220 13:33:49 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:32.220 13:33:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.220 13:33:49 -- target/tls.sh@31 -- # waitforlisten 77737 /var/tmp/bdevperf.sock 00:23:32.220 13:33:49 -- common/autotest_common.sh@817 -- # '[' -z 77737 ']' 00:23:32.220 13:33:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.220 13:33:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:32.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:32.220 13:33:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.220 13:33:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:32.220 13:33:49 -- common/autotest_common.sh@10 -- # set +x 00:23:32.481 [2024-04-26 13:33:49.679025] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:32.482 [2024-04-26 13:33:49.679134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77737 ] 00:23:32.482 [2024-04-26 13:33:49.815036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.482 [2024-04-26 13:33:49.926433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.420 13:33:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:33.420 13:33:50 -- common/autotest_common.sh@850 -- # return 0 00:23:33.420 13:33:50 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:33.679 [2024-04-26 13:33:50.921547] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:33.679 [2024-04-26 13:33:50.923481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd79f0 (9): Bad file descriptor 00:23:33.679 [2024-04-26 13:33:50.924475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:33.679 [2024-04-26 13:33:50.924514] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:33.679 [2024-04-26 13:33:50.924546] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:33.679 2024/04/26 13:33:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:33.679 request: 00:23:33.679 { 00:23:33.679 "method": "bdev_nvme_attach_controller", 00:23:33.679 "params": { 00:23:33.679 "name": "TLSTEST", 00:23:33.679 "trtype": "tcp", 00:23:33.679 "traddr": "10.0.0.2", 00:23:33.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.679 "adrfam": "ipv4", 00:23:33.679 "trsvcid": "4420", 00:23:33.679 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:33.679 } 00:23:33.679 } 00:23:33.679 Got JSON-RPC error response 00:23:33.679 GoRPCClient: error on JSON-RPC call 00:23:33.679 13:33:50 -- target/tls.sh@36 -- # killprocess 77737 00:23:33.679 13:33:50 -- common/autotest_common.sh@936 -- # '[' -z 77737 ']' 00:23:33.679 13:33:50 -- common/autotest_common.sh@940 -- # kill -0 77737 00:23:33.679 13:33:50 -- common/autotest_common.sh@941 -- # uname 00:23:33.679 13:33:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.679 13:33:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77737 00:23:33.679 13:33:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:33.679 killing process with pid 77737 00:23:33.679 13:33:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:33.679 13:33:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77737' 00:23:33.679 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.679 00:23:33.679 Latency(us) 00:23:33.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.679 =================================================================================================================== 00:23:33.679 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:33.679 13:33:50 -- common/autotest_common.sh@955 -- # kill 77737 00:23:33.679 13:33:50 -- common/autotest_common.sh@960 -- # wait 77737 00:23:33.938 13:33:51 -- target/tls.sh@37 -- # return 1 00:23:33.938 13:33:51 -- common/autotest_common.sh@641 -- # es=1 00:23:33.938 13:33:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:33.938 13:33:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:33.938 13:33:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:33.938 13:33:51 -- target/tls.sh@158 -- # killprocess 77083 00:23:33.938 13:33:51 -- common/autotest_common.sh@936 -- # '[' -z 77083 ']' 00:23:33.938 13:33:51 -- common/autotest_common.sh@940 -- # kill -0 77083 00:23:33.938 13:33:51 -- common/autotest_common.sh@941 -- # uname 00:23:33.938 13:33:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:33.938 13:33:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77083 00:23:33.938 13:33:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:33.938 killing process with pid 77083 00:23:33.938 13:33:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:33.938 13:33:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77083' 00:23:33.938 13:33:51 -- common/autotest_common.sh@955 -- # kill 77083 00:23:33.938 [2024-04-26 13:33:51.253181] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.938 13:33:51 -- 
common/autotest_common.sh@960 -- # wait 77083 00:23:34.219 13:33:51 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:34.219 13:33:51 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:34.219 13:33:51 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:34.219 13:33:51 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:23:34.219 13:33:51 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:34.219 13:33:51 -- nvmf/common.sh@693 -- # digest=2 00:23:34.219 13:33:51 -- nvmf/common.sh@694 -- # python - 00:23:34.219 13:33:51 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:34.219 13:33:51 -- target/tls.sh@160 -- # mktemp 00:23:34.219 13:33:51 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.F9heb1DsHD 00:23:34.219 13:33:51 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:34.219 13:33:51 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.F9heb1DsHD 00:23:34.219 13:33:51 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:34.219 13:33:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:34.219 13:33:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:34.219 13:33:51 -- common/autotest_common.sh@10 -- # set +x 00:23:34.219 13:33:51 -- nvmf/common.sh@470 -- # nvmfpid=77798 00:23:34.219 13:33:51 -- nvmf/common.sh@471 -- # waitforlisten 77798 00:23:34.219 13:33:51 -- common/autotest_common.sh@817 -- # '[' -z 77798 ']' 00:23:34.219 13:33:51 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.219 13:33:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.219 13:33:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:34.219 13:33:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.219 13:33:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:34.219 13:33:51 -- common/autotest_common.sh@10 -- # set +x 00:23:34.219 [2024-04-26 13:33:51.649445] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:34.219 [2024-04-26 13:33:51.649574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.477 [2024-04-26 13:33:51.791218] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.477 [2024-04-26 13:33:51.893793] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.477 [2024-04-26 13:33:51.893901] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.477 [2024-04-26 13:33:51.893915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.477 [2024-04-26 13:33:51.893924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.477 [2024-04-26 13:33:51.893932] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:34.477 [2024-04-26 13:33:51.893972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.413 13:33:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:35.413 13:33:52 -- common/autotest_common.sh@850 -- # return 0 00:23:35.413 13:33:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:35.413 13:33:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:35.413 13:33:52 -- common/autotest_common.sh@10 -- # set +x 00:23:35.413 13:33:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.414 13:33:52 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:23:35.414 13:33:52 -- target/tls.sh@49 -- # local key=/tmp/tmp.F9heb1DsHD 00:23:35.414 13:33:52 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:35.672 [2024-04-26 13:33:52.953323] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.672 13:33:52 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:35.931 13:33:53 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:36.189 [2024-04-26 13:33:53.489459] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:36.189 [2024-04-26 13:33:53.489853] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.189 13:33:53 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:36.447 malloc0 00:23:36.447 13:33:53 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:36.763 13:33:53 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:37.023 [2024-04-26 13:33:54.213848] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:37.023 13:33:54 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9heb1DsHD 00:23:37.023 13:33:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:37.023 13:33:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:37.023 13:33:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:37.023 13:33:54 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.F9heb1DsHD' 00:23:37.023 13:33:54 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:37.023 13:33:54 -- target/tls.sh@28 -- # bdevperf_pid=77895 00:23:37.023 13:33:54 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:37.023 13:33:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:37.023 13:33:54 -- target/tls.sh@31 -- # waitforlisten 77895 /var/tmp/bdevperf.sock 00:23:37.023 13:33:54 -- common/autotest_common.sh@817 -- # '[' -z 77895 ']' 00:23:37.023 13:33:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.023 13:33:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:37.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
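Collected from the xtrace lines above, the setup_nvmf_tgt sequence for this key amounts to the following rpc.py calls, with arguments exactly as in this run; the -k flag on the listener is what enables the experimental TLS listen path noted in the log.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.F9heb1DsHD

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"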
00:23:37.023 13:33:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.023 13:33:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:37.023 13:33:54 -- common/autotest_common.sh@10 -- # set +x 00:23:37.023 [2024-04-26 13:33:54.288901] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:37.023 [2024-04-26 13:33:54.289004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77895 ] 00:23:37.023 [2024-04-26 13:33:54.427436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.282 [2024-04-26 13:33:54.555085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.849 13:33:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:37.849 13:33:55 -- common/autotest_common.sh@850 -- # return 0 00:23:37.849 13:33:55 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:38.107 [2024-04-26 13:33:55.525624] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.107 [2024-04-26 13:33:55.525754] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:38.366 TLSTESTn1 00:23:38.366 13:33:55 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:38.366 Running I/O for 10 seconds... 
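On the initiator side, the ten-second run announced above is driven entirely over bdevperf's own RPC socket; the traced commands reduce to the following, with the same arguments as in this run and the PSK file shared between host and target.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests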
00:23:48.411 00:23:48.411 Latency(us) 00:23:48.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.411 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:48.411 Verification LBA range: start 0x0 length 0x2000 00:23:48.411 TLSTESTn1 : 10.02 3936.98 15.38 0.00 0.00 32449.27 7030.23 26333.56 00:23:48.411 =================================================================================================================== 00:23:48.411 Total : 3936.98 15.38 0.00 0.00 32449.27 7030.23 26333.56 00:23:48.411 0 00:23:48.411 13:34:05 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.411 13:34:05 -- target/tls.sh@45 -- # killprocess 77895 00:23:48.411 13:34:05 -- common/autotest_common.sh@936 -- # '[' -z 77895 ']' 00:23:48.411 13:34:05 -- common/autotest_common.sh@940 -- # kill -0 77895 00:23:48.411 13:34:05 -- common/autotest_common.sh@941 -- # uname 00:23:48.411 13:34:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:48.411 13:34:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77895 00:23:48.411 13:34:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:48.411 13:34:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:48.411 killing process with pid 77895 00:23:48.411 13:34:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77895' 00:23:48.411 13:34:05 -- common/autotest_common.sh@955 -- # kill 77895 00:23:48.411 Received shutdown signal, test time was about 10.000000 seconds 00:23:48.411 00:23:48.411 Latency(us) 00:23:48.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.411 =================================================================================================================== 00:23:48.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.411 [2024-04-26 13:34:05.787337] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:48.411 13:34:05 -- common/autotest_common.sh@960 -- # wait 77895 00:23:48.669 13:34:06 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.F9heb1DsHD 00:23:48.670 13:34:06 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9heb1DsHD 00:23:48.670 13:34:06 -- common/autotest_common.sh@638 -- # local es=0 00:23:48.670 13:34:06 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9heb1DsHD 00:23:48.670 13:34:06 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:23:48.670 13:34:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:48.670 13:34:06 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:23:48.670 13:34:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:48.670 13:34:06 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.F9heb1DsHD 00:23:48.670 13:34:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:48.670 13:34:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:48.670 13:34:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:48.670 13:34:06 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.F9heb1DsHD' 00:23:48.670 13:34:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:48.670 13:34:06 -- target/tls.sh@28 -- # bdevperf_pid=78053 00:23:48.670 
13:34:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:48.670 13:34:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:48.670 13:34:06 -- target/tls.sh@31 -- # waitforlisten 78053 /var/tmp/bdevperf.sock 00:23:48.670 13:34:06 -- common/autotest_common.sh@817 -- # '[' -z 78053 ']' 00:23:48.670 13:34:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.670 13:34:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:48.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.670 13:34:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.670 13:34:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:48.670 13:34:06 -- common/autotest_common.sh@10 -- # set +x 00:23:48.670 [2024-04-26 13:34:06.116829] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:48.670 [2024-04-26 13:34:06.116943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78053 ] 00:23:48.928 [2024-04-26 13:34:06.259224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.928 [2024-04-26 13:34:06.374343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.865 13:34:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:49.865 13:34:07 -- common/autotest_common.sh@850 -- # return 0 00:23:49.865 13:34:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:50.125 [2024-04-26 13:34:07.394509] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:50.125 [2024-04-26 13:34:07.394596] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:50.125 [2024-04-26 13:34:07.394609] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.F9heb1DsHD 00:23:50.125 2024/04/26 13:34:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.F9heb1DsHD subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:23:50.125 request: 00:23:50.125 { 00:23:50.125 "method": "bdev_nvme_attach_controller", 00:23:50.125 "params": { 00:23:50.125 "name": "TLSTEST", 00:23:50.125 "trtype": "tcp", 00:23:50.125 "traddr": "10.0.0.2", 00:23:50.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.125 "adrfam": "ipv4", 00:23:50.125 "trsvcid": "4420", 00:23:50.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.125 "psk": "/tmp/tmp.F9heb1DsHD" 00:23:50.125 } 00:23:50.125 } 00:23:50.125 Got JSON-RPC error response 00:23:50.125 GoRPCClient: error on JSON-RPC call 00:23:50.125 13:34:07 -- target/tls.sh@36 -- # killprocess 78053 00:23:50.125 13:34:07 -- common/autotest_common.sh@936 -- # '[' -z 78053 ']' 00:23:50.125 13:34:07 -- common/autotest_common.sh@940 -- # kill -0 78053 
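The Code=-1 / "Operation not permitted" failure above is the expected outcome of the chmod 0666 at tls.sh@170: the PSK file is rejected once it is readable by group or others. A rough local equivalent of that guard, illustrative only and not the exact SPDK check, is:

perm=$(stat -c '%a' /tmp/tmp.F9heb1DsHD)
if (( 0$perm & 0077 )); then
    echo "PSK file permissions too open: got 0$perm, expected 0600" >&2
fi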
00:23:50.125 13:34:07 -- common/autotest_common.sh@941 -- # uname 00:23:50.125 13:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:50.125 13:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78053 00:23:50.125 killing process with pid 78053 00:23:50.125 Received shutdown signal, test time was about 10.000000 seconds 00:23:50.125 00:23:50.125 Latency(us) 00:23:50.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.125 =================================================================================================================== 00:23:50.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:50.125 13:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:50.125 13:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:50.125 13:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78053' 00:23:50.125 13:34:07 -- common/autotest_common.sh@955 -- # kill 78053 00:23:50.125 13:34:07 -- common/autotest_common.sh@960 -- # wait 78053 00:23:50.385 13:34:07 -- target/tls.sh@37 -- # return 1 00:23:50.385 13:34:07 -- common/autotest_common.sh@641 -- # es=1 00:23:50.385 13:34:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:50.385 13:34:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:50.385 13:34:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:50.385 13:34:07 -- target/tls.sh@174 -- # killprocess 77798 00:23:50.385 13:34:07 -- common/autotest_common.sh@936 -- # '[' -z 77798 ']' 00:23:50.385 13:34:07 -- common/autotest_common.sh@940 -- # kill -0 77798 00:23:50.385 13:34:07 -- common/autotest_common.sh@941 -- # uname 00:23:50.385 13:34:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:50.385 13:34:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77798 00:23:50.385 killing process with pid 77798 00:23:50.385 13:34:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:50.385 13:34:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:50.385 13:34:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77798' 00:23:50.385 13:34:07 -- common/autotest_common.sh@955 -- # kill 77798 00:23:50.385 [2024-04-26 13:34:07.721317] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:50.385 13:34:07 -- common/autotest_common.sh@960 -- # wait 77798 00:23:50.644 13:34:07 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:50.644 13:34:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:50.644 13:34:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:50.644 13:34:07 -- common/autotest_common.sh@10 -- # set +x 00:23:50.644 13:34:08 -- nvmf/common.sh@470 -- # nvmfpid=78104 00:23:50.644 13:34:08 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.644 13:34:08 -- nvmf/common.sh@471 -- # waitforlisten 78104 00:23:50.644 13:34:08 -- common/autotest_common.sh@817 -- # '[' -z 78104 ']' 00:23:50.644 13:34:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.644 13:34:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:50.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:50.644 13:34:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.644 13:34:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:50.644 13:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:50.644 [2024-04-26 13:34:08.066447] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:50.644 [2024-04-26 13:34:08.066569] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.902 [2024-04-26 13:34:08.207893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.902 [2024-04-26 13:34:08.318628] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.902 [2024-04-26 13:34:08.318714] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.902 [2024-04-26 13:34:08.318727] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.902 [2024-04-26 13:34:08.318736] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.902 [2024-04-26 13:34:08.318745] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.902 [2024-04-26 13:34:08.318798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.837 13:34:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:51.837 13:34:09 -- common/autotest_common.sh@850 -- # return 0 00:23:51.837 13:34:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:51.837 13:34:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:51.837 13:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:51.837 13:34:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.837 13:34:09 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:23:51.837 13:34:09 -- common/autotest_common.sh@638 -- # local es=0 00:23:51.837 13:34:09 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:23:51.837 13:34:09 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:23:51.837 13:34:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:51.837 13:34:09 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:23:51.837 13:34:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:51.837 13:34:09 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:23:51.837 13:34:09 -- target/tls.sh@49 -- # local key=/tmp/tmp.F9heb1DsHD 00:23:51.837 13:34:09 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:52.096 [2024-04-26 13:34:09.359223] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.096 13:34:09 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.355 13:34:09 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.613 [2024-04-26 13:34:09.883362] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.613 
[2024-04-26 13:34:09.883633] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.613 13:34:09 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.872 malloc0 00:23:52.872 13:34:10 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.130 13:34:10 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:53.389 [2024-04-26 13:34:10.739807] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:53.389 [2024-04-26 13:34:10.739862] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:53.389 [2024-04-26 13:34:10.739890] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:23:53.389 2024/04/26 13:34:10 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.F9heb1DsHD], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:23:53.389 request: 00:23:53.389 { 00:23:53.389 "method": "nvmf_subsystem_add_host", 00:23:53.389 "params": { 00:23:53.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.389 "host": "nqn.2016-06.io.spdk:host1", 00:23:53.389 "psk": "/tmp/tmp.F9heb1DsHD" 00:23:53.389 } 00:23:53.389 } 00:23:53.389 Got JSON-RPC error response 00:23:53.389 GoRPCClient: error on JSON-RPC call 00:23:53.389 13:34:10 -- common/autotest_common.sh@641 -- # es=1 00:23:53.389 13:34:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:53.389 13:34:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:53.389 13:34:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:53.389 13:34:10 -- target/tls.sh@180 -- # killprocess 78104 00:23:53.389 13:34:10 -- common/autotest_common.sh@936 -- # '[' -z 78104 ']' 00:23:53.389 13:34:10 -- common/autotest_common.sh@940 -- # kill -0 78104 00:23:53.389 13:34:10 -- common/autotest_common.sh@941 -- # uname 00:23:53.389 13:34:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:53.389 13:34:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78104 00:23:53.389 13:34:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:53.389 13:34:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:53.389 killing process with pid 78104 00:23:53.389 13:34:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78104' 00:23:53.389 13:34:10 -- common/autotest_common.sh@955 -- # kill 78104 00:23:53.389 13:34:10 -- common/autotest_common.sh@960 -- # wait 78104 00:23:53.648 13:34:11 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.F9heb1DsHD 00:23:53.648 13:34:11 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:53.648 13:34:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:53.648 13:34:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:53.648 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.648 13:34:11 -- nvmf/common.sh@470 -- # nvmfpid=78220 00:23:53.648 13:34:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.648 13:34:11 -- nvmf/common.sh@471 -- # waitforlisten 78220 00:23:53.648 13:34:11 -- common/autotest_common.sh@817 -- # '[' -z 
78220 ']' 00:23:53.648 13:34:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.648 13:34:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:53.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.648 13:34:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.648 13:34:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:53.648 13:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:53.907 [2024-04-26 13:34:11.115237] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:53.907 [2024-04-26 13:34:11.115333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.907 [2024-04-26 13:34:11.252569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.166 [2024-04-26 13:34:11.373528] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.166 [2024-04-26 13:34:11.373610] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.166 [2024-04-26 13:34:11.373633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.166 [2024-04-26 13:34:11.373644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.166 [2024-04-26 13:34:11.373653] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:54.166 [2024-04-26 13:34:11.373691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.732 13:34:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:54.732 13:34:12 -- common/autotest_common.sh@850 -- # return 0 00:23:54.732 13:34:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:54.732 13:34:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:54.732 13:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:54.732 13:34:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.732 13:34:12 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:23:54.732 13:34:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.F9heb1DsHD 00:23:54.732 13:34:12 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:54.991 [2024-04-26 13:34:12.408719] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.991 13:34:12 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:55.249 13:34:12 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:55.508 [2024-04-26 13:34:12.904802] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:55.508 [2024-04-26 13:34:12.905071] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.508 13:34:12 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:55.767 malloc0 00:23:56.023 13:34:13 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:56.281 13:34:13 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:56.564 [2024-04-26 13:34:13.744914] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:56.564 13:34:13 -- target/tls.sh@188 -- # bdevperf_pid=78324 00:23:56.564 13:34:13 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:56.564 13:34:13 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.564 13:34:13 -- target/tls.sh@191 -- # waitforlisten 78324 /var/tmp/bdevperf.sock 00:23:56.564 13:34:13 -- common/autotest_common.sh@817 -- # '[' -z 78324 ']' 00:23:56.564 13:34:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:56.564 13:34:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:56.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:56.564 13:34:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:56.564 13:34:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:56.564 13:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:56.564 [2024-04-26 13:34:13.823759] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:56.564 [2024-04-26 13:34:13.823878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78324 ] 00:23:56.564 [2024-04-26 13:34:13.956593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.854 [2024-04-26 13:34:14.088069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.422 13:34:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:57.422 13:34:14 -- common/autotest_common.sh@850 -- # return 0 00:23:57.422 13:34:14 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:23:57.681 [2024-04-26 13:34:15.101892] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:57.681 [2024-04-26 13:34:15.102018] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:57.939 TLSTESTn1 00:23:57.939 13:34:15 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:23:58.199 13:34:15 -- target/tls.sh@196 -- # tgtconf='{ 00:23:58.199 "subsystems": [ 00:23:58.199 { 00:23:58.199 "subsystem": "keyring", 00:23:58.199 "config": [] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "iobuf", 00:23:58.199 "config": [ 00:23:58.199 { 00:23:58.199 "method": "iobuf_set_options", 00:23:58.199 "params": { 00:23:58.199 "large_bufsize": 135168, 00:23:58.199 "large_pool_count": 1024, 00:23:58.199 "small_bufsize": 8192, 00:23:58.199 "small_pool_count": 8192 00:23:58.199 } 
00:23:58.199 } 00:23:58.199 ] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "sock", 00:23:58.199 "config": [ 00:23:58.199 { 00:23:58.199 "method": "sock_impl_set_options", 00:23:58.199 "params": { 00:23:58.199 "enable_ktls": false, 00:23:58.199 "enable_placement_id": 0, 00:23:58.199 "enable_quickack": false, 00:23:58.199 "enable_recv_pipe": true, 00:23:58.199 "enable_zerocopy_send_client": false, 00:23:58.199 "enable_zerocopy_send_server": true, 00:23:58.199 "impl_name": "posix", 00:23:58.199 "recv_buf_size": 2097152, 00:23:58.199 "send_buf_size": 2097152, 00:23:58.199 "tls_version": 0, 00:23:58.199 "zerocopy_threshold": 0 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "sock_impl_set_options", 00:23:58.199 "params": { 00:23:58.199 "enable_ktls": false, 00:23:58.199 "enable_placement_id": 0, 00:23:58.199 "enable_quickack": false, 00:23:58.199 "enable_recv_pipe": true, 00:23:58.199 "enable_zerocopy_send_client": false, 00:23:58.199 "enable_zerocopy_send_server": true, 00:23:58.199 "impl_name": "ssl", 00:23:58.199 "recv_buf_size": 4096, 00:23:58.199 "send_buf_size": 4096, 00:23:58.199 "tls_version": 0, 00:23:58.199 "zerocopy_threshold": 0 00:23:58.199 } 00:23:58.199 } 00:23:58.199 ] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "vmd", 00:23:58.199 "config": [] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "accel", 00:23:58.199 "config": [ 00:23:58.199 { 00:23:58.199 "method": "accel_set_options", 00:23:58.199 "params": { 00:23:58.199 "buf_count": 2048, 00:23:58.199 "large_cache_size": 16, 00:23:58.199 "sequence_count": 2048, 00:23:58.199 "small_cache_size": 128, 00:23:58.199 "task_count": 2048 00:23:58.199 } 00:23:58.199 } 00:23:58.199 ] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "bdev", 00:23:58.199 "config": [ 00:23:58.199 { 00:23:58.199 "method": "bdev_set_options", 00:23:58.199 "params": { 00:23:58.199 "bdev_auto_examine": true, 00:23:58.199 "bdev_io_cache_size": 256, 00:23:58.199 "bdev_io_pool_size": 65535, 00:23:58.199 "iobuf_large_cache_size": 16, 00:23:58.199 "iobuf_small_cache_size": 128 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_raid_set_options", 00:23:58.199 "params": { 00:23:58.199 "process_window_size_kb": 1024 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_iscsi_set_options", 00:23:58.199 "params": { 00:23:58.199 "timeout_sec": 30 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_nvme_set_options", 00:23:58.199 "params": { 00:23:58.199 "action_on_timeout": "none", 00:23:58.199 "allow_accel_sequence": false, 00:23:58.199 "arbitration_burst": 0, 00:23:58.199 "bdev_retry_count": 3, 00:23:58.199 "ctrlr_loss_timeout_sec": 0, 00:23:58.199 "delay_cmd_submit": true, 00:23:58.199 "dhchap_dhgroups": [ 00:23:58.199 "null", 00:23:58.199 "ffdhe2048", 00:23:58.199 "ffdhe3072", 00:23:58.199 "ffdhe4096", 00:23:58.199 "ffdhe6144", 00:23:58.199 "ffdhe8192" 00:23:58.199 ], 00:23:58.199 "dhchap_digests": [ 00:23:58.199 "sha256", 00:23:58.199 "sha384", 00:23:58.199 "sha512" 00:23:58.199 ], 00:23:58.199 "disable_auto_failback": false, 00:23:58.199 "fast_io_fail_timeout_sec": 0, 00:23:58.199 "generate_uuids": false, 00:23:58.199 "high_priority_weight": 0, 00:23:58.199 "io_path_stat": false, 00:23:58.199 "io_queue_requests": 0, 00:23:58.199 "keep_alive_timeout_ms": 10000, 00:23:58.199 "low_priority_weight": 0, 00:23:58.199 "medium_priority_weight": 0, 00:23:58.199 "nvme_adminq_poll_period_us": 10000, 00:23:58.199 "nvme_error_stat": false, 
00:23:58.199 "nvme_ioq_poll_period_us": 0, 00:23:58.199 "rdma_cm_event_timeout_ms": 0, 00:23:58.199 "rdma_max_cq_size": 0, 00:23:58.199 "rdma_srq_size": 0, 00:23:58.199 "reconnect_delay_sec": 0, 00:23:58.199 "timeout_admin_us": 0, 00:23:58.199 "timeout_us": 0, 00:23:58.199 "transport_ack_timeout": 0, 00:23:58.199 "transport_retry_count": 4, 00:23:58.199 "transport_tos": 0 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_nvme_set_hotplug", 00:23:58.199 "params": { 00:23:58.199 "enable": false, 00:23:58.199 "period_us": 100000 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_malloc_create", 00:23:58.199 "params": { 00:23:58.199 "block_size": 4096, 00:23:58.199 "name": "malloc0", 00:23:58.199 "num_blocks": 8192, 00:23:58.199 "optimal_io_boundary": 0, 00:23:58.199 "physical_block_size": 4096, 00:23:58.199 "uuid": "f1fe242d-86cc-44cc-ac53-6cbf3a92b34b" 00:23:58.199 } 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "method": "bdev_wait_for_examine" 00:23:58.199 } 00:23:58.199 ] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "nbd", 00:23:58.199 "config": [] 00:23:58.199 }, 00:23:58.199 { 00:23:58.199 "subsystem": "scheduler", 00:23:58.199 "config": [ 00:23:58.199 { 00:23:58.199 "method": "framework_set_scheduler", 00:23:58.199 "params": { 00:23:58.199 "name": "static" 00:23:58.199 } 00:23:58.199 } 00:23:58.199 ] 00:23:58.199 }, 00:23:58.199 { 00:23:58.200 "subsystem": "nvmf", 00:23:58.200 "config": [ 00:23:58.200 { 00:23:58.200 "method": "nvmf_set_config", 00:23:58.200 "params": { 00:23:58.200 "admin_cmd_passthru": { 00:23:58.200 "identify_ctrlr": false 00:23:58.200 }, 00:23:58.200 "discovery_filter": "match_any" 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_set_max_subsystems", 00:23:58.200 "params": { 00:23:58.200 "max_subsystems": 1024 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_set_crdt", 00:23:58.200 "params": { 00:23:58.200 "crdt1": 0, 00:23:58.200 "crdt2": 0, 00:23:58.200 "crdt3": 0 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_create_transport", 00:23:58.200 "params": { 00:23:58.200 "abort_timeout_sec": 1, 00:23:58.200 "ack_timeout": 0, 00:23:58.200 "buf_cache_size": 4294967295, 00:23:58.200 "c2h_success": false, 00:23:58.200 "data_wr_pool_size": 0, 00:23:58.200 "dif_insert_or_strip": false, 00:23:58.200 "in_capsule_data_size": 4096, 00:23:58.200 "io_unit_size": 131072, 00:23:58.200 "max_aq_depth": 128, 00:23:58.200 "max_io_qpairs_per_ctrlr": 127, 00:23:58.200 "max_io_size": 131072, 00:23:58.200 "max_queue_depth": 128, 00:23:58.200 "num_shared_buffers": 511, 00:23:58.200 "sock_priority": 0, 00:23:58.200 "trtype": "TCP", 00:23:58.200 "zcopy": false 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_create_subsystem", 00:23:58.200 "params": { 00:23:58.200 "allow_any_host": false, 00:23:58.200 "ana_reporting": false, 00:23:58.200 "max_cntlid": 65519, 00:23:58.200 "max_namespaces": 10, 00:23:58.200 "min_cntlid": 1, 00:23:58.200 "model_number": "SPDK bdev Controller", 00:23:58.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.200 "serial_number": "SPDK00000000000001" 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_subsystem_add_host", 00:23:58.200 "params": { 00:23:58.200 "host": "nqn.2016-06.io.spdk:host1", 00:23:58.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.200 "psk": "/tmp/tmp.F9heb1DsHD" 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_subsystem_add_ns", 
00:23:58.200 "params": { 00:23:58.200 "namespace": { 00:23:58.200 "bdev_name": "malloc0", 00:23:58.200 "nguid": "F1FE242D86CC44CCAC536CBF3A92B34B", 00:23:58.200 "no_auto_visible": false, 00:23:58.200 "nsid": 1, 00:23:58.200 "uuid": "f1fe242d-86cc-44cc-ac53-6cbf3a92b34b" 00:23:58.200 }, 00:23:58.200 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:58.200 } 00:23:58.200 }, 00:23:58.200 { 00:23:58.200 "method": "nvmf_subsystem_add_listener", 00:23:58.200 "params": { 00:23:58.200 "listen_address": { 00:23:58.200 "adrfam": "IPv4", 00:23:58.200 "traddr": "10.0.0.2", 00:23:58.200 "trsvcid": "4420", 00:23:58.200 "trtype": "TCP" 00:23:58.200 }, 00:23:58.200 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.200 "secure_channel": true 00:23:58.200 } 00:23:58.200 } 00:23:58.200 ] 00:23:58.200 } 00:23:58.200 ] 00:23:58.200 }' 00:23:58.200 13:34:15 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:58.459 13:34:15 -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:58.459 "subsystems": [ 00:23:58.459 { 00:23:58.459 "subsystem": "keyring", 00:23:58.459 "config": [] 00:23:58.459 }, 00:23:58.459 { 00:23:58.459 "subsystem": "iobuf", 00:23:58.459 "config": [ 00:23:58.459 { 00:23:58.459 "method": "iobuf_set_options", 00:23:58.459 "params": { 00:23:58.459 "large_bufsize": 135168, 00:23:58.459 "large_pool_count": 1024, 00:23:58.459 "small_bufsize": 8192, 00:23:58.459 "small_pool_count": 8192 00:23:58.459 } 00:23:58.459 } 00:23:58.459 ] 00:23:58.459 }, 00:23:58.459 { 00:23:58.459 "subsystem": "sock", 00:23:58.459 "config": [ 00:23:58.459 { 00:23:58.459 "method": "sock_impl_set_options", 00:23:58.459 "params": { 00:23:58.460 "enable_ktls": false, 00:23:58.460 "enable_placement_id": 0, 00:23:58.460 "enable_quickack": false, 00:23:58.460 "enable_recv_pipe": true, 00:23:58.460 "enable_zerocopy_send_client": false, 00:23:58.460 "enable_zerocopy_send_server": true, 00:23:58.460 "impl_name": "posix", 00:23:58.460 "recv_buf_size": 2097152, 00:23:58.460 "send_buf_size": 2097152, 00:23:58.460 "tls_version": 0, 00:23:58.460 "zerocopy_threshold": 0 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "sock_impl_set_options", 00:23:58.460 "params": { 00:23:58.460 "enable_ktls": false, 00:23:58.460 "enable_placement_id": 0, 00:23:58.460 "enable_quickack": false, 00:23:58.460 "enable_recv_pipe": true, 00:23:58.460 "enable_zerocopy_send_client": false, 00:23:58.460 "enable_zerocopy_send_server": true, 00:23:58.460 "impl_name": "ssl", 00:23:58.460 "recv_buf_size": 4096, 00:23:58.460 "send_buf_size": 4096, 00:23:58.460 "tls_version": 0, 00:23:58.460 "zerocopy_threshold": 0 00:23:58.460 } 00:23:58.460 } 00:23:58.460 ] 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "subsystem": "vmd", 00:23:58.460 "config": [] 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "subsystem": "accel", 00:23:58.460 "config": [ 00:23:58.460 { 00:23:58.460 "method": "accel_set_options", 00:23:58.460 "params": { 00:23:58.460 "buf_count": 2048, 00:23:58.460 "large_cache_size": 16, 00:23:58.460 "sequence_count": 2048, 00:23:58.460 "small_cache_size": 128, 00:23:58.460 "task_count": 2048 00:23:58.460 } 00:23:58.460 } 00:23:58.460 ] 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "subsystem": "bdev", 00:23:58.460 "config": [ 00:23:58.460 { 00:23:58.460 "method": "bdev_set_options", 00:23:58.460 "params": { 00:23:58.460 "bdev_auto_examine": true, 00:23:58.460 "bdev_io_cache_size": 256, 00:23:58.460 "bdev_io_pool_size": 65535, 00:23:58.460 "iobuf_large_cache_size": 16, 00:23:58.460 "iobuf_small_cache_size": 128 
00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_raid_set_options", 00:23:58.460 "params": { 00:23:58.460 "process_window_size_kb": 1024 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_iscsi_set_options", 00:23:58.460 "params": { 00:23:58.460 "timeout_sec": 30 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_nvme_set_options", 00:23:58.460 "params": { 00:23:58.460 "action_on_timeout": "none", 00:23:58.460 "allow_accel_sequence": false, 00:23:58.460 "arbitration_burst": 0, 00:23:58.460 "bdev_retry_count": 3, 00:23:58.460 "ctrlr_loss_timeout_sec": 0, 00:23:58.460 "delay_cmd_submit": true, 00:23:58.460 "dhchap_dhgroups": [ 00:23:58.460 "null", 00:23:58.460 "ffdhe2048", 00:23:58.460 "ffdhe3072", 00:23:58.460 "ffdhe4096", 00:23:58.460 "ffdhe6144", 00:23:58.460 "ffdhe8192" 00:23:58.460 ], 00:23:58.460 "dhchap_digests": [ 00:23:58.460 "sha256", 00:23:58.460 "sha384", 00:23:58.460 "sha512" 00:23:58.460 ], 00:23:58.460 "disable_auto_failback": false, 00:23:58.460 "fast_io_fail_timeout_sec": 0, 00:23:58.460 "generate_uuids": false, 00:23:58.460 "high_priority_weight": 0, 00:23:58.460 "io_path_stat": false, 00:23:58.460 "io_queue_requests": 512, 00:23:58.460 "keep_alive_timeout_ms": 10000, 00:23:58.460 "low_priority_weight": 0, 00:23:58.460 "medium_priority_weight": 0, 00:23:58.460 "nvme_adminq_poll_period_us": 10000, 00:23:58.460 "nvme_error_stat": false, 00:23:58.460 "nvme_ioq_poll_period_us": 0, 00:23:58.460 "rdma_cm_event_timeout_ms": 0, 00:23:58.460 "rdma_max_cq_size": 0, 00:23:58.460 "rdma_srq_size": 0, 00:23:58.460 "reconnect_delay_sec": 0, 00:23:58.460 "timeout_admin_us": 0, 00:23:58.460 "timeout_us": 0, 00:23:58.460 "transport_ack_timeout": 0, 00:23:58.460 "transport_retry_count": 4, 00:23:58.460 "transport_tos": 0 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_nvme_attach_controller", 00:23:58.460 "params": { 00:23:58.460 "adrfam": "IPv4", 00:23:58.460 "ctrlr_loss_timeout_sec": 0, 00:23:58.460 "ddgst": false, 00:23:58.460 "fast_io_fail_timeout_sec": 0, 00:23:58.460 "hdgst": false, 00:23:58.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.460 "name": "TLSTEST", 00:23:58.460 "prchk_guard": false, 00:23:58.460 "prchk_reftag": false, 00:23:58.460 "psk": "/tmp/tmp.F9heb1DsHD", 00:23:58.460 "reconnect_delay_sec": 0, 00:23:58.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.460 "traddr": "10.0.0.2", 00:23:58.460 "trsvcid": "4420", 00:23:58.460 "trtype": "TCP" 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_nvme_set_hotplug", 00:23:58.460 "params": { 00:23:58.460 "enable": false, 00:23:58.460 "period_us": 100000 00:23:58.460 } 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "method": "bdev_wait_for_examine" 00:23:58.460 } 00:23:58.460 ] 00:23:58.460 }, 00:23:58.460 { 00:23:58.460 "subsystem": "nbd", 00:23:58.460 "config": [] 00:23:58.460 } 00:23:58.460 ] 00:23:58.460 }' 00:23:58.460 13:34:15 -- target/tls.sh@199 -- # killprocess 78324 00:23:58.460 13:34:15 -- common/autotest_common.sh@936 -- # '[' -z 78324 ']' 00:23:58.460 13:34:15 -- common/autotest_common.sh@940 -- # kill -0 78324 00:23:58.719 13:34:15 -- common/autotest_common.sh@941 -- # uname 00:23:58.719 13:34:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:58.719 13:34:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78324 00:23:58.719 13:34:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:58.719 killing process with pid 78324 00:23:58.719 13:34:15 
-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:58.719 13:34:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78324' 00:23:58.719 13:34:15 -- common/autotest_common.sh@955 -- # kill 78324 00:23:58.719 Received shutdown signal, test time was about 10.000000 seconds 00:23:58.719 00:23:58.719 Latency(us) 00:23:58.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.719 =================================================================================================================== 00:23:58.719 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:58.719 [2024-04-26 13:34:15.935915] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:58.719 13:34:15 -- common/autotest_common.sh@960 -- # wait 78324 00:23:58.977 13:34:16 -- target/tls.sh@200 -- # killprocess 78220 00:23:58.977 13:34:16 -- common/autotest_common.sh@936 -- # '[' -z 78220 ']' 00:23:58.977 13:34:16 -- common/autotest_common.sh@940 -- # kill -0 78220 00:23:58.977 13:34:16 -- common/autotest_common.sh@941 -- # uname 00:23:58.977 13:34:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:58.977 13:34:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78220 00:23:58.977 13:34:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:58.977 13:34:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:58.977 13:34:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78220' 00:23:58.977 killing process with pid 78220 00:23:58.977 13:34:16 -- common/autotest_common.sh@955 -- # kill 78220 00:23:58.977 [2024-04-26 13:34:16.219052] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:58.977 13:34:16 -- common/autotest_common.sh@960 -- # wait 78220 00:23:59.236 13:34:16 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:59.236 13:34:16 -- target/tls.sh@203 -- # echo '{ 00:23:59.236 "subsystems": [ 00:23:59.236 { 00:23:59.236 "subsystem": "keyring", 00:23:59.236 "config": [] 00:23:59.236 }, 00:23:59.236 { 00:23:59.236 "subsystem": "iobuf", 00:23:59.236 "config": [ 00:23:59.236 { 00:23:59.236 "method": "iobuf_set_options", 00:23:59.236 "params": { 00:23:59.236 "large_bufsize": 135168, 00:23:59.237 "large_pool_count": 1024, 00:23:59.237 "small_bufsize": 8192, 00:23:59.237 "small_pool_count": 8192 00:23:59.237 } 00:23:59.237 } 00:23:59.237 ] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "sock", 00:23:59.237 "config": [ 00:23:59.237 { 00:23:59.237 "method": "sock_impl_set_options", 00:23:59.237 "params": { 00:23:59.237 "enable_ktls": false, 00:23:59.237 "enable_placement_id": 0, 00:23:59.237 "enable_quickack": false, 00:23:59.237 "enable_recv_pipe": true, 00:23:59.237 "enable_zerocopy_send_client": false, 00:23:59.237 "enable_zerocopy_send_server": true, 00:23:59.237 "impl_name": "posix", 00:23:59.237 "recv_buf_size": 2097152, 00:23:59.237 "send_buf_size": 2097152, 00:23:59.237 "tls_version": 0, 00:23:59.237 "zerocopy_threshold": 0 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "sock_impl_set_options", 00:23:59.237 "params": { 00:23:59.237 "enable_ktls": false, 00:23:59.237 "enable_placement_id": 0, 00:23:59.237 "enable_quickack": false, 00:23:59.237 "enable_recv_pipe": true, 00:23:59.237 "enable_zerocopy_send_client": false, 00:23:59.237 
"enable_zerocopy_send_server": true, 00:23:59.237 "impl_name": "ssl", 00:23:59.237 "recv_buf_size": 4096, 00:23:59.237 "send_buf_size": 4096, 00:23:59.237 "tls_version": 0, 00:23:59.237 "zerocopy_threshold": 0 00:23:59.237 } 00:23:59.237 } 00:23:59.237 ] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "vmd", 00:23:59.237 "config": [] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "accel", 00:23:59.237 "config": [ 00:23:59.237 { 00:23:59.237 "method": "accel_set_options", 00:23:59.237 "params": { 00:23:59.237 "buf_count": 2048, 00:23:59.237 "large_cache_size": 16, 00:23:59.237 "sequence_count": 2048, 00:23:59.237 "small_cache_size": 128, 00:23:59.237 "task_count": 2048 00:23:59.237 } 00:23:59.237 } 00:23:59.237 ] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "bdev", 00:23:59.237 "config": [ 00:23:59.237 { 00:23:59.237 "method": "bdev_set_options", 00:23:59.237 "params": { 00:23:59.237 "bdev_auto_examine": true, 00:23:59.237 "bdev_io_cache_size": 256, 00:23:59.237 "bdev_io_pool_size": 65535, 00:23:59.237 "iobuf_large_cache_size": 16, 00:23:59.237 "iobuf_small_cache_size": 128 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "bdev_raid_set_options", 00:23:59.237 "params": { 00:23:59.237 "process_window_size_kb": 1024 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "bdev_iscsi_set_options", 00:23:59.237 "params": { 00:23:59.237 "timeout_sec": 30 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "bdev_nvme_set_options", 00:23:59.237 "params": { 00:23:59.237 "action_on_timeout": "none", 00:23:59.237 "allow_accel_sequence": false, 00:23:59.237 "arbitration_burst": 0, 00:23:59.237 "bdev_retry_count": 3, 00:23:59.237 "ctrlr_loss_timeout_sec": 0, 00:23:59.237 "delay_cmd_submit": true, 00:23:59.237 "dhchap_dhgroups": [ 00:23:59.237 "null", 00:23:59.237 "ffdhe2048", 00:23:59.237 "ffdhe3072", 00:23:59.237 "ffdhe4096", 00:23:59.237 "ffdhe6144", 00:23:59.237 "ffdhe8192" 00:23:59.237 ], 00:23:59.237 "dhchap_digests": [ 00:23:59.237 "sha256", 00:23:59.237 "sha384", 00:23:59.237 "sha512" 00:23:59.237 ], 00:23:59.237 "disable_auto_failback": false, 00:23:59.237 "fast_io_fail_timeout_sec": 0, 00:23:59.237 "generate_uuids": false, 00:23:59.237 "high_priority_weight": 0, 00:23:59.237 "io_path_stat": false, 00:23:59.237 "io_queue_requests": 0, 00:23:59.237 "keep_alive_timeout_ms": 10000, 00:23:59.237 "low_priority_weight": 0, 00:23:59.237 "medium_priority_weight": 0, 00:23:59.237 "nvme_adminq_poll_period_us": 10000, 00:23:59.237 "nvme_error_stat": false, 00:23:59.237 "nvme_ioq_poll_period_us": 0, 00:23:59.237 "rdma_cm_event_timeout_ms": 0, 00:23:59.237 "rdma_max_cq_size": 0, 00:23:59.237 "rdma_srq_size": 0, 00:23:59.237 "reconnect_delay_sec": 0, 00:23:59.237 "timeout_admin_us": 0, 00:23:59.237 "timeout_us": 0, 00:23:59.237 "transport_ack_timeout": 0, 00:23:59.237 "transport_retry_count": 4, 00:23:59.237 "transport_tos": 0 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "bdev_nvme_set_hotplug", 00:23:59.237 "params": { 00:23:59.237 "enable": false, 00:23:59.237 "period_us": 100000 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": "bdev_malloc_create", 00:23:59.237 "params": { 00:23:59.237 "block_size": 4096, 00:23:59.237 "name": "malloc0", 00:23:59.237 "num_blocks": 8192, 00:23:59.237 "optimal_io_boundary": 0, 00:23:59.237 "physical_block_size": 4096, 00:23:59.237 "uuid": "f1fe242d-86cc-44cc-ac53-6cbf3a92b34b" 00:23:59.237 } 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "method": 
"bdev_wait_for_examine" 00:23:59.237 } 00:23:59.237 ] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "nbd", 00:23:59.237 "config": [] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "scheduler", 00:23:59.237 "config": [ 00:23:59.237 { 00:23:59.237 "method": "framework_set_scheduler", 00:23:59.237 "params": { 00:23:59.237 "name": "static" 00:23:59.237 } 00:23:59.237 } 00:23:59.237 ] 00:23:59.237 }, 00:23:59.237 { 00:23:59.237 "subsystem": "nvmf", 00:23:59.237 "config": [ 00:23:59.237 { 00:23:59.237 "method": "nvmf_set_config", 00:23:59.237 "params": { 00:23:59.238 "admin_cmd_passthru": { 00:23:59.238 "identify_ctrlr": false 00:23:59.238 }, 00:23:59.238 "discovery_filter": "match_any" 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_set_max_subsystems", 00:23:59.238 "params": { 00:23:59.238 "max_subsystems": 1024 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_set_crdt", 00:23:59.238 "params": { 00:23:59.238 "crdt1": 0, 00:23:59.238 "crdt2": 0, 00:23:59.238 "crdt3": 0 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_create_transport", 00:23:59.238 "params": { 00:23:59.238 "abort_timeout_sec": 1, 00:23:59.238 "ack_timeout": 0, 00:23:59.238 "buf_cache_size": 4294967295, 00:23:59.238 "c2h_success": false, 00:23:59.238 "data_wr_pool_size": 0, 00:23:59.238 "dif_insert_or_strip": false, 00:23:59.238 "in_capsule_data_size": 4096, 00:23:59.238 "io_unit_size": 131072, 00:23:59.238 "max_aq_depth": 128, 00:23:59.238 "max_io_qpairs_per_ctrlr": 127, 00:23:59.238 "max_io_size": 131072, 00:23:59.238 "max_queue_depth": 128, 00:23:59.238 "num_shared_buffers": 511, 00:23:59.238 "sock_priority": 0, 00:23:59.238 "trtype": "TCP", 00:23:59.238 "zcopy": false 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_create_subsystem", 00:23:59.238 "params": { 00:23:59.238 "allow_any_host": false, 00:23:59.238 "ana_reporting": false, 00:23:59.238 "max_cntlid": 65519, 00:23:59.238 "max_namespaces": 10, 00:23:59.238 "min_cntlid": 1, 00:23:59.238 "model_number": "SPDK bdev Controller", 00:23:59.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.238 "serial_number": "SPDK00000000000001" 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_subsystem_add_host", 00:23:59.238 "params": { 00:23:59.238 "host": "nqn.2016-06.io.spdk:host1", 00:23:59.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.238 "psk": "/tmp/tmp.F9heb1DsHD" 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_subsystem_add_ns", 00:23:59.238 "params": { 00:23:59.238 "namespace": { 00:23:59.238 "bdev_name": "malloc0", 00:23:59.238 "nguid": "F1FE242D86CC44CCAC536CBF3A92B34B", 00:23:59.238 "no_auto_visible": false, 00:23:59.238 "nsid": 1, 00:23:59.238 "uuid": "f1fe242d-86cc-44cc-ac53-6cbf3a92b34b" 00:23:59.238 }, 00:23:59.238 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:23:59.238 } 00:23:59.238 }, 00:23:59.238 { 00:23:59.238 "method": "nvmf_subsystem_add_listener", 00:23:59.238 "params": { 00:23:59.238 "listen_address": { 00:23:59.238 "adrfam": "IPv4", 00:23:59.238 "traddr": "10.0.0.2", 00:23:59.238 "trsvcid": "4420", 00:23:59.238 "trtype": "TCP" 00:23:59.238 }, 00:23:59.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.238 "secure_channel": true 00:23:59.238 } 00:23:59.238 } 00:23:59.238 ] 00:23:59.238 } 00:23:59.238 ] 00:23:59.238 }' 00:23:59.238 13:34:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.238 13:34:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.238 13:34:16 
-- common/autotest_common.sh@10 -- # set +x 00:23:59.238 13:34:16 -- nvmf/common.sh@470 -- # nvmfpid=78403 00:23:59.238 13:34:16 -- nvmf/common.sh@471 -- # waitforlisten 78403 00:23:59.238 13:34:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:59.238 13:34:16 -- common/autotest_common.sh@817 -- # '[' -z 78403 ']' 00:23:59.238 13:34:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.238 13:34:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.238 13:34:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.238 13:34:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.238 13:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:59.238 [2024-04-26 13:34:16.548434] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:23:59.238 [2024-04-26 13:34:16.548547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.497 [2024-04-26 13:34:16.688365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.497 [2024-04-26 13:34:16.799543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.497 [2024-04-26 13:34:16.799614] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.497 [2024-04-26 13:34:16.799627] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.497 [2024-04-26 13:34:16.799636] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.497 [2024-04-26 13:34:16.799644] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
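The tls.sh@203 invocation above is the second half of the pattern: the configuration captured earlier with save_config is echoed back into a brand-new target through /dev/fd/62, so the TLS subsystem, listener, and PSK host entry come up from JSON rather than from individual RPCs. A hypothetical condensed form of that step using process substitution:

tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")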
00:23:59.497 [2024-04-26 13:34:16.799744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.756 [2024-04-26 13:34:17.021985] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.756 [2024-04-26 13:34:17.037940] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:59.756 [2024-04-26 13:34:17.053924] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.756 [2024-04-26 13:34:17.054156] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.323 13:34:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.323 13:34:17 -- common/autotest_common.sh@850 -- # return 0 00:24:00.323 13:34:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.323 13:34:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.323 13:34:17 -- common/autotest_common.sh@10 -- # set +x 00:24:00.323 13:34:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.323 13:34:17 -- target/tls.sh@207 -- # bdevperf_pid=78450 00:24:00.323 13:34:17 -- target/tls.sh@208 -- # waitforlisten 78450 /var/tmp/bdevperf.sock 00:24:00.323 13:34:17 -- common/autotest_common.sh@817 -- # '[' -z 78450 ']' 00:24:00.323 13:34:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.323 13:34:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.323 13:34:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
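For reference, the JSON configuration applied to the first target (pid 78403) above corresponds, in condensed form, to the explicit RPC sequence this same run issues against a freshly started target a little further down; the NQNs, serial number, address, and PSK path below are taken verbatim from this log:

# Sketch: per-RPC equivalent of the TLS target configuration shown above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                        # TCP transport, options as used by the script
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # '-k' = secure (TLS) channel
$RPC bdev_malloc_create 32 4096 -b malloc0                  # backing ramdisk for the namespace
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
     --psk /tmp/tmp.F9heb1DsHD                              # PSK given as a file path (deprecated form, per the warning above)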
00:24:00.323 13:34:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.323 13:34:17 -- common/autotest_common.sh@10 -- # set +x 00:24:00.323 13:34:17 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:00.323 13:34:17 -- target/tls.sh@204 -- # echo '{ 00:24:00.323 "subsystems": [ 00:24:00.323 { 00:24:00.323 "subsystem": "keyring", 00:24:00.323 "config": [] 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "subsystem": "iobuf", 00:24:00.323 "config": [ 00:24:00.323 { 00:24:00.323 "method": "iobuf_set_options", 00:24:00.323 "params": { 00:24:00.323 "large_bufsize": 135168, 00:24:00.323 "large_pool_count": 1024, 00:24:00.323 "small_bufsize": 8192, 00:24:00.323 "small_pool_count": 8192 00:24:00.323 } 00:24:00.323 } 00:24:00.323 ] 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "subsystem": "sock", 00:24:00.323 "config": [ 00:24:00.323 { 00:24:00.323 "method": "sock_impl_set_options", 00:24:00.323 "params": { 00:24:00.323 "enable_ktls": false, 00:24:00.323 "enable_placement_id": 0, 00:24:00.323 "enable_quickack": false, 00:24:00.323 "enable_recv_pipe": true, 00:24:00.323 "enable_zerocopy_send_client": false, 00:24:00.323 "enable_zerocopy_send_server": true, 00:24:00.323 "impl_name": "posix", 00:24:00.323 "recv_buf_size": 2097152, 00:24:00.323 "send_buf_size": 2097152, 00:24:00.323 "tls_version": 0, 00:24:00.323 "zerocopy_threshold": 0 00:24:00.323 } 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "method": "sock_impl_set_options", 00:24:00.323 "params": { 00:24:00.323 "enable_ktls": false, 00:24:00.323 "enable_placement_id": 0, 00:24:00.323 "enable_quickack": false, 00:24:00.323 "enable_recv_pipe": true, 00:24:00.323 "enable_zerocopy_send_client": false, 00:24:00.323 "enable_zerocopy_send_server": true, 00:24:00.323 "impl_name": "ssl", 00:24:00.323 "recv_buf_size": 4096, 00:24:00.323 "send_buf_size": 4096, 00:24:00.323 "tls_version": 0, 00:24:00.323 "zerocopy_threshold": 0 00:24:00.323 } 00:24:00.323 } 00:24:00.323 ] 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "subsystem": "vmd", 00:24:00.323 "config": [] 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "subsystem": "accel", 00:24:00.323 "config": [ 00:24:00.323 { 00:24:00.323 "method": "accel_set_options", 00:24:00.323 "params": { 00:24:00.323 "buf_count": 2048, 00:24:00.323 "large_cache_size": 16, 00:24:00.323 "sequence_count": 2048, 00:24:00.323 "small_cache_size": 128, 00:24:00.323 "task_count": 2048 00:24:00.323 } 00:24:00.323 } 00:24:00.323 ] 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "subsystem": "bdev", 00:24:00.323 "config": [ 00:24:00.323 { 00:24:00.323 "method": "bdev_set_options", 00:24:00.323 "params": { 00:24:00.323 "bdev_auto_examine": true, 00:24:00.323 "bdev_io_cache_size": 256, 00:24:00.323 "bdev_io_pool_size": 65535, 00:24:00.323 "iobuf_large_cache_size": 16, 00:24:00.323 "iobuf_small_cache_size": 128 00:24:00.323 } 00:24:00.323 }, 00:24:00.323 { 00:24:00.323 "method": "bdev_raid_set_options", 00:24:00.323 "params": { 00:24:00.323 "process_window_size_kb": 1024 00:24:00.323 } 00:24:00.323 }, 00:24:00.324 { 00:24:00.324 "method": "bdev_iscsi_set_options", 00:24:00.324 "params": { 00:24:00.324 "timeout_sec": 30 00:24:00.324 } 00:24:00.324 }, 00:24:00.324 { 00:24:00.324 "method": "bdev_nvme_set_options", 00:24:00.324 "params": { 00:24:00.324 "action_on_timeout": "none", 00:24:00.324 "allow_accel_sequence": false, 00:24:00.324 "arbitration_burst": 0, 00:24:00.324 "bdev_retry_count": 3, 00:24:00.324 
"ctrlr_loss_timeout_sec": 0, 00:24:00.324 "delay_cmd_submit": true, 00:24:00.324 "dhchap_dhgroups": [ 00:24:00.324 "null", 00:24:00.324 "ffdhe2048", 00:24:00.324 "ffdhe3072", 00:24:00.324 "ffdhe4096", 00:24:00.324 "ffdhe6144", 00:24:00.324 "ffdhe8192" 00:24:00.324 ], 00:24:00.324 "dhchap_digests": [ 00:24:00.324 "sha256", 00:24:00.324 "sha384", 00:24:00.324 "sha512" 00:24:00.324 ], 00:24:00.324 "disable_auto_failback": false, 00:24:00.324 "fast_io_fail_timeout_sec": 0, 00:24:00.324 "generate_uuids": false, 00:24:00.324 "high_priority_weight": 0, 00:24:00.324 "io_path_stat": false, 00:24:00.324 "io_queue_requests": 512, 00:24:00.324 "keep_alive_timeout_ms": 10000, 00:24:00.324 "low_priority_weight": 0, 00:24:00.324 "medium_priority_weight": 0, 00:24:00.324 "nvme_adminq_poll_period_us": 10000, 00:24:00.324 "nvme_error_stat": false, 00:24:00.324 "nvme_ioq_poll_period_us": 0, 00:24:00.324 "rdma_cm_event_timeout_ms": 0, 00:24:00.324 "rdma_max_cq_size": 0, 00:24:00.324 "rdma_srq_size": 0, 00:24:00.324 "reconnect_delay_sec": 0, 00:24:00.324 "timeout_admin_us": 0, 00:24:00.324 "timeout_us": 0, 00:24:00.324 "transport_ack_timeout": 0, 00:24:00.324 "transport_retry_count": 4, 00:24:00.324 "transport_tos": 0 00:24:00.324 } 00:24:00.324 }, 00:24:00.324 { 00:24:00.324 "method": "bdev_nvme_attach_controller", 00:24:00.324 "params": { 00:24:00.324 "adrfam": "IPv4", 00:24:00.324 "ctrlr_loss_timeout_sec": 0, 00:24:00.324 "ddgst": false, 00:24:00.324 "fast_io_fail_timeout_sec": 0, 00:24:00.324 "hdgst": false, 00:24:00.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.324 "name": "TLSTEST", 00:24:00.324 "prchk_guard": false, 00:24:00.324 "prchk_reftag": false, 00:24:00.324 "psk": "/tmp/tmp.F9heb1DsHD", 00:24:00.324 "reconnect_delay_sec": 0, 00:24:00.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.324 "traddr": "10.0.0.2", 00:24:00.324 "trsvcid": "4420", 00:24:00.324 "trtype": "TCP" 00:24:00.324 } 00:24:00.324 }, 00:24:00.324 { 00:24:00.324 "method": "bdev_nvme_set_hotplug", 00:24:00.324 "params": { 00:24:00.324 "enable": false, 00:24:00.324 "period_us": 100000 00:24:00.324 } 00:24:00.324 }, 00:24:00.324 { 00:24:00.324 "method": "bdev_wait_for_examine" 00:24:00.324 } 00:24:00.324 ] 00:24:00.324 }, 00:24:00.324 { 00:24:00.324 "subsystem": "nbd", 00:24:00.324 "config": [] 00:24:00.324 } 00:24:00.324 ] 00:24:00.324 }' 00:24:00.324 [2024-04-26 13:34:17.647726] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:24:00.324 [2024-04-26 13:34:17.647864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78450 ] 00:24:00.582 [2024-04-26 13:34:17.785258] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.582 [2024-04-26 13:34:17.916122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.840 [2024-04-26 13:34:18.079953] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:00.840 [2024-04-26 13:34:18.080093] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:01.406 13:34:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.406 13:34:18 -- common/autotest_common.sh@850 -- # return 0 00:24:01.406 13:34:18 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:01.406 Running I/O for 10 seconds... 00:24:11.427 00:24:11.427 Latency(us) 00:24:11.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.427 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:11.427 Verification LBA range: start 0x0 length 0x2000 00:24:11.427 TLSTESTn1 : 10.02 3850.22 15.04 0.00 0.00 33183.46 6255.71 41466.41 00:24:11.427 =================================================================================================================== 00:24:11.427 Total : 3850.22 15.04 0.00 0.00 33183.46 6255.71 41466.41 00:24:11.427 0 00:24:11.427 13:34:28 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.427 13:34:28 -- target/tls.sh@214 -- # killprocess 78450 00:24:11.427 13:34:28 -- common/autotest_common.sh@936 -- # '[' -z 78450 ']' 00:24:11.427 13:34:28 -- common/autotest_common.sh@940 -- # kill -0 78450 00:24:11.427 13:34:28 -- common/autotest_common.sh@941 -- # uname 00:24:11.428 13:34:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.428 13:34:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78450 00:24:11.428 13:34:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:11.428 13:34:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:11.428 killing process with pid 78450 00:24:11.428 13:34:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78450' 00:24:11.428 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.428 00:24:11.428 Latency(us) 00:24:11.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.428 =================================================================================================================== 00:24:11.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.428 13:34:28 -- common/autotest_common.sh@955 -- # kill 78450 00:24:11.428 [2024-04-26 13:34:28.824420] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.428 13:34:28 -- common/autotest_common.sh@960 -- # wait 78450 00:24:11.687 13:34:29 -- target/tls.sh@215 -- # killprocess 78403 00:24:11.687 13:34:29 -- common/autotest_common.sh@936 -- # '[' -z 78403 ']' 00:24:11.687 13:34:29 -- common/autotest_common.sh@940 -- # kill -0 78403 00:24:11.687 13:34:29 
-- common/autotest_common.sh@941 -- # uname 00:24:11.687 13:34:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:11.687 13:34:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78403 00:24:11.687 13:34:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:11.687 13:34:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:11.687 killing process with pid 78403 00:24:11.687 13:34:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78403' 00:24:11.687 13:34:29 -- common/autotest_common.sh@955 -- # kill 78403 00:24:11.687 [2024-04-26 13:34:29.107708] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:11.687 13:34:29 -- common/autotest_common.sh@960 -- # wait 78403 00:24:11.946 13:34:29 -- target/tls.sh@218 -- # nvmfappstart 00:24:11.946 13:34:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:11.946 13:34:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:11.946 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:11.946 13:34:29 -- nvmf/common.sh@470 -- # nvmfpid=78601 00:24:11.946 13:34:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:11.946 13:34:29 -- nvmf/common.sh@471 -- # waitforlisten 78601 00:24:11.946 13:34:29 -- common/autotest_common.sh@817 -- # '[' -z 78601 ']' 00:24:11.946 13:34:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.946 13:34:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.946 13:34:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.946 13:34:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.946 13:34:29 -- common/autotest_common.sh@10 -- # set +x 00:24:12.204 [2024-04-26 13:34:29.429673] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:12.204 [2024-04-26 13:34:29.429761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.204 [2024-04-26 13:34:29.568373] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.463 [2024-04-26 13:34:29.696674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.463 [2024-04-26 13:34:29.696752] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.463 [2024-04-26 13:34:29.696767] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.463 [2024-04-26 13:34:29.696794] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.463 [2024-04-26 13:34:29.696807] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:12.463 [2024-04-26 13:34:29.696843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.030 13:34:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:13.030 13:34:30 -- common/autotest_common.sh@850 -- # return 0 00:24:13.030 13:34:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:13.030 13:34:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:13.030 13:34:30 -- common/autotest_common.sh@10 -- # set +x 00:24:13.289 13:34:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.289 13:34:30 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.F9heb1DsHD 00:24:13.289 13:34:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.F9heb1DsHD 00:24:13.289 13:34:30 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:13.289 [2024-04-26 13:34:30.711530] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.289 13:34:30 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:13.547 13:34:30 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:13.805 [2024-04-26 13:34:31.239633] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:13.805 [2024-04-26 13:34:31.239904] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.063 13:34:31 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:14.322 malloc0 00:24:14.322 13:34:31 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:14.580 13:34:31 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.F9heb1DsHD 00:24:14.838 [2024-04-26 13:34:32.155130] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:14.838 13:34:32 -- target/tls.sh@222 -- # bdevperf_pid=78705 00:24:14.838 13:34:32 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:14.838 13:34:32 -- target/tls.sh@225 -- # waitforlisten 78705 /var/tmp/bdevperf.sock 00:24:14.838 13:34:32 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:14.838 13:34:32 -- common/autotest_common.sh@817 -- # '[' -z 78705 ']' 00:24:14.838 13:34:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:14.838 13:34:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:14.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:14.838 13:34:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:14.838 13:34:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:14.838 13:34:32 -- common/autotest_common.sh@10 -- # set +x 00:24:14.838 [2024-04-26 13:34:32.231500] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:24:14.838 [2024-04-26 13:34:32.231602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78705 ] 00:24:15.096 [2024-04-26 13:34:32.366386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.096 [2024-04-26 13:34:32.485355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.032 13:34:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:16.032 13:34:33 -- common/autotest_common.sh@850 -- # return 0 00:24:16.032 13:34:33 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9heb1DsHD 00:24:16.032 13:34:33 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:16.291 [2024-04-26 13:34:33.635087] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.291 nvme0n1 00:24:16.291 13:34:33 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.549 Running I/O for 1 seconds... 00:24:17.485 00:24:17.485 Latency(us) 00:24:17.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.485 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:17.485 Verification LBA range: start 0x0 length 0x2000 00:24:17.485 nvme0n1 : 1.03 3812.02 14.89 0.00 0.00 33084.77 7536.64 20614.05 00:24:17.485 =================================================================================================================== 00:24:17.485 Total : 3812.02 14.89 0.00 0.00 33084.77 7536.64 20614.05 00:24:17.485 0 00:24:17.485 13:34:34 -- target/tls.sh@234 -- # killprocess 78705 00:24:17.485 13:34:34 -- common/autotest_common.sh@936 -- # '[' -z 78705 ']' 00:24:17.485 13:34:34 -- common/autotest_common.sh@940 -- # kill -0 78705 00:24:17.485 13:34:34 -- common/autotest_common.sh@941 -- # uname 00:24:17.485 13:34:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.485 13:34:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78705 00:24:17.485 13:34:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:17.485 13:34:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:17.485 13:34:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78705' 00:24:17.485 killing process with pid 78705 00:24:17.485 13:34:34 -- common/autotest_common.sh@955 -- # kill 78705 00:24:17.485 Received shutdown signal, test time was about 1.000000 seconds 00:24:17.485 00:24:17.485 Latency(us) 00:24:17.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.485 =================================================================================================================== 00:24:17.485 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.485 13:34:34 -- common/autotest_common.sh@960 -- # wait 78705 00:24:17.743 13:34:35 -- target/tls.sh@235 -- # killprocess 78601 00:24:17.743 13:34:35 -- common/autotest_common.sh@936 -- # '[' -z 78601 ']' 00:24:17.743 13:34:35 -- common/autotest_common.sh@940 -- # kill -0 78601 00:24:17.743 13:34:35 -- common/autotest_common.sh@941 -- # 
uname 00:24:17.743 13:34:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:17.743 13:34:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78601 00:24:18.002 13:34:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:18.002 13:34:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:18.002 13:34:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78601' 00:24:18.002 killing process with pid 78601 00:24:18.002 13:34:35 -- common/autotest_common.sh@955 -- # kill 78601 00:24:18.002 [2024-04-26 13:34:35.193582] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:18.002 13:34:35 -- common/autotest_common.sh@960 -- # wait 78601 00:24:18.260 13:34:35 -- target/tls.sh@238 -- # nvmfappstart 00:24:18.260 13:34:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:18.260 13:34:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:18.260 13:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.260 13:34:35 -- nvmf/common.sh@470 -- # nvmfpid=78779 00:24:18.260 13:34:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:18.260 13:34:35 -- nvmf/common.sh@471 -- # waitforlisten 78779 00:24:18.260 13:34:35 -- common/autotest_common.sh@817 -- # '[' -z 78779 ']' 00:24:18.260 13:34:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.260 13:34:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:18.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.260 13:34:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.260 13:34:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:18.260 13:34:35 -- common/autotest_common.sh@10 -- # set +x 00:24:18.260 [2024-04-26 13:34:35.528203] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:18.260 [2024-04-26 13:34:35.528317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.260 [2024-04-26 13:34:35.664263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.518 [2024-04-26 13:34:35.782360] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.519 [2024-04-26 13:34:35.782418] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.519 [2024-04-26 13:34:35.782431] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.519 [2024-04-26 13:34:35.782440] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.519 [2024-04-26 13:34:35.782448] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
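On the initiator side, the runs above (pids 78705 and 78829) switch from passing the PSK path directly in the attach parameters, which the log flags as deprecated, to registering the key with the keyring and referencing it by name. The host-side flow, with socket path, key file, and NQNs exactly as used in this run:

# Sketch: TLS host-side attach against a bdevperf started with '-z -r /var/tmp/bdevperf.sock'.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9heb1DsHD      # register the PSK as 'key0'
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
     -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1                   # yields bdev nvme0n1
# Run the configured verify workload (duration set by '-t' on the bdevperf command line):
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests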
00:24:18.519 [2024-04-26 13:34:35.782490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.464 13:34:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:19.464 13:34:36 -- common/autotest_common.sh@850 -- # return 0 00:24:19.464 13:34:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:19.464 13:34:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:19.464 13:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.464 13:34:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.465 13:34:36 -- target/tls.sh@239 -- # rpc_cmd 00:24:19.465 13:34:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.465 13:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.465 [2024-04-26 13:34:36.608758] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.465 malloc0 00:24:19.465 [2024-04-26 13:34:36.640668] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:19.465 [2024-04-26 13:34:36.640926] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.465 13:34:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.465 13:34:36 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:19.465 13:34:36 -- target/tls.sh@252 -- # bdevperf_pid=78829 00:24:19.465 13:34:36 -- target/tls.sh@254 -- # waitforlisten 78829 /var/tmp/bdevperf.sock 00:24:19.465 13:34:36 -- common/autotest_common.sh@817 -- # '[' -z 78829 ']' 00:24:19.465 13:34:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.465 13:34:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:19.465 13:34:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.465 13:34:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:19.465 13:34:36 -- common/autotest_common.sh@10 -- # set +x 00:24:19.465 [2024-04-26 13:34:36.732845] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:24:19.465 [2024-04-26 13:34:36.732973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78829 ] 00:24:19.465 [2024-04-26 13:34:36.876665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.728 [2024-04-26 13:34:37.005149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.673 13:34:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:20.673 13:34:37 -- common/autotest_common.sh@850 -- # return 0 00:24:20.673 13:34:37 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.F9heb1DsHD 00:24:20.673 13:34:38 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:20.932 [2024-04-26 13:34:38.369653] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.190 nvme0n1 00:24:21.190 13:34:38 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:21.190 Running I/O for 1 seconds... 00:24:22.577 00:24:22.577 Latency(us) 00:24:22.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.577 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:22.577 Verification LBA range: start 0x0 length 0x2000 00:24:22.577 nvme0n1 : 1.03 3696.88 14.44 0.00 0.00 34125.54 7536.64 20971.52 00:24:22.577 =================================================================================================================== 00:24:22.577 Total : 3696.88 14.44 0.00 0.00 34125.54 7536.64 20971.52 00:24:22.577 0 00:24:22.577 13:34:39 -- target/tls.sh@263 -- # rpc_cmd save_config 00:24:22.577 13:34:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.578 13:34:39 -- common/autotest_common.sh@10 -- # set +x 00:24:22.578 13:34:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.578 13:34:39 -- target/tls.sh@263 -- # tgtcfg='{ 00:24:22.578 "subsystems": [ 00:24:22.578 { 00:24:22.578 "subsystem": "keyring", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "keyring_file_add_key", 00:24:22.578 "params": { 00:24:22.578 "name": "key0", 00:24:22.578 "path": "/tmp/tmp.F9heb1DsHD" 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "iobuf", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "iobuf_set_options", 00:24:22.578 "params": { 00:24:22.578 "large_bufsize": 135168, 00:24:22.578 "large_pool_count": 1024, 00:24:22.578 "small_bufsize": 8192, 00:24:22.578 "small_pool_count": 8192 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "sock", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "sock_impl_set_options", 00:24:22.578 "params": { 00:24:22.578 "enable_ktls": false, 00:24:22.578 "enable_placement_id": 0, 00:24:22.578 "enable_quickack": false, 00:24:22.578 "enable_recv_pipe": true, 00:24:22.578 "enable_zerocopy_send_client": false, 00:24:22.578 "enable_zerocopy_send_server": true, 00:24:22.578 "impl_name": "posix", 00:24:22.578 "recv_buf_size": 2097152, 00:24:22.578 "send_buf_size": 2097152, 
00:24:22.578 "tls_version": 0, 00:24:22.578 "zerocopy_threshold": 0 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "sock_impl_set_options", 00:24:22.578 "params": { 00:24:22.578 "enable_ktls": false, 00:24:22.578 "enable_placement_id": 0, 00:24:22.578 "enable_quickack": false, 00:24:22.578 "enable_recv_pipe": true, 00:24:22.578 "enable_zerocopy_send_client": false, 00:24:22.578 "enable_zerocopy_send_server": true, 00:24:22.578 "impl_name": "ssl", 00:24:22.578 "recv_buf_size": 4096, 00:24:22.578 "send_buf_size": 4096, 00:24:22.578 "tls_version": 0, 00:24:22.578 "zerocopy_threshold": 0 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "vmd", 00:24:22.578 "config": [] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "accel", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "accel_set_options", 00:24:22.578 "params": { 00:24:22.578 "buf_count": 2048, 00:24:22.578 "large_cache_size": 16, 00:24:22.578 "sequence_count": 2048, 00:24:22.578 "small_cache_size": 128, 00:24:22.578 "task_count": 2048 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "bdev", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "bdev_set_options", 00:24:22.578 "params": { 00:24:22.578 "bdev_auto_examine": true, 00:24:22.578 "bdev_io_cache_size": 256, 00:24:22.578 "bdev_io_pool_size": 65535, 00:24:22.578 "iobuf_large_cache_size": 16, 00:24:22.578 "iobuf_small_cache_size": 128 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "bdev_raid_set_options", 00:24:22.578 "params": { 00:24:22.578 "process_window_size_kb": 1024 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "bdev_iscsi_set_options", 00:24:22.578 "params": { 00:24:22.578 "timeout_sec": 30 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "bdev_nvme_set_options", 00:24:22.578 "params": { 00:24:22.578 "action_on_timeout": "none", 00:24:22.578 "allow_accel_sequence": false, 00:24:22.578 "arbitration_burst": 0, 00:24:22.578 "bdev_retry_count": 3, 00:24:22.578 "ctrlr_loss_timeout_sec": 0, 00:24:22.578 "delay_cmd_submit": true, 00:24:22.578 "dhchap_dhgroups": [ 00:24:22.578 "null", 00:24:22.578 "ffdhe2048", 00:24:22.578 "ffdhe3072", 00:24:22.578 "ffdhe4096", 00:24:22.578 "ffdhe6144", 00:24:22.578 "ffdhe8192" 00:24:22.578 ], 00:24:22.578 "dhchap_digests": [ 00:24:22.578 "sha256", 00:24:22.578 "sha384", 00:24:22.578 "sha512" 00:24:22.578 ], 00:24:22.578 "disable_auto_failback": false, 00:24:22.578 "fast_io_fail_timeout_sec": 0, 00:24:22.578 "generate_uuids": false, 00:24:22.578 "high_priority_weight": 0, 00:24:22.578 "io_path_stat": false, 00:24:22.578 "io_queue_requests": 0, 00:24:22.578 "keep_alive_timeout_ms": 10000, 00:24:22.578 "low_priority_weight": 0, 00:24:22.578 "medium_priority_weight": 0, 00:24:22.578 "nvme_adminq_poll_period_us": 10000, 00:24:22.578 "nvme_error_stat": false, 00:24:22.578 "nvme_ioq_poll_period_us": 0, 00:24:22.578 "rdma_cm_event_timeout_ms": 0, 00:24:22.578 "rdma_max_cq_size": 0, 00:24:22.578 "rdma_srq_size": 0, 00:24:22.578 "reconnect_delay_sec": 0, 00:24:22.578 "timeout_admin_us": 0, 00:24:22.578 "timeout_us": 0, 00:24:22.578 "transport_ack_timeout": 0, 00:24:22.578 "transport_retry_count": 4, 00:24:22.578 "transport_tos": 0 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "bdev_nvme_set_hotplug", 00:24:22.578 "params": { 00:24:22.578 "enable": false, 00:24:22.578 "period_us": 100000 00:24:22.578 } 00:24:22.578 
}, 00:24:22.578 { 00:24:22.578 "method": "bdev_malloc_create", 00:24:22.578 "params": { 00:24:22.578 "block_size": 4096, 00:24:22.578 "name": "malloc0", 00:24:22.578 "num_blocks": 8192, 00:24:22.578 "optimal_io_boundary": 0, 00:24:22.578 "physical_block_size": 4096, 00:24:22.578 "uuid": "6f70bc65-0d40-4628-8e0b-f55f03368bb3" 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "bdev_wait_for_examine" 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "nbd", 00:24:22.578 "config": [] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "scheduler", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "framework_set_scheduler", 00:24:22.578 "params": { 00:24:22.578 "name": "static" 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "subsystem": "nvmf", 00:24:22.578 "config": [ 00:24:22.578 { 00:24:22.578 "method": "nvmf_set_config", 00:24:22.578 "params": { 00:24:22.578 "admin_cmd_passthru": { 00:24:22.578 "identify_ctrlr": false 00:24:22.578 }, 00:24:22.578 "discovery_filter": "match_any" 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_set_max_subsystems", 00:24:22.578 "params": { 00:24:22.578 "max_subsystems": 1024 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_set_crdt", 00:24:22.578 "params": { 00:24:22.578 "crdt1": 0, 00:24:22.578 "crdt2": 0, 00:24:22.578 "crdt3": 0 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_create_transport", 00:24:22.578 "params": { 00:24:22.578 "abort_timeout_sec": 1, 00:24:22.578 "ack_timeout": 0, 00:24:22.578 "buf_cache_size": 4294967295, 00:24:22.578 "c2h_success": false, 00:24:22.578 "data_wr_pool_size": 0, 00:24:22.578 "dif_insert_or_strip": false, 00:24:22.578 "in_capsule_data_size": 4096, 00:24:22.578 "io_unit_size": 131072, 00:24:22.578 "max_aq_depth": 128, 00:24:22.578 "max_io_qpairs_per_ctrlr": 127, 00:24:22.578 "max_io_size": 131072, 00:24:22.578 "max_queue_depth": 128, 00:24:22.578 "num_shared_buffers": 511, 00:24:22.578 "sock_priority": 0, 00:24:22.578 "trtype": "TCP", 00:24:22.578 "zcopy": false 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_create_subsystem", 00:24:22.578 "params": { 00:24:22.578 "allow_any_host": false, 00:24:22.578 "ana_reporting": false, 00:24:22.578 "max_cntlid": 65519, 00:24:22.578 "max_namespaces": 32, 00:24:22.578 "min_cntlid": 1, 00:24:22.578 "model_number": "SPDK bdev Controller", 00:24:22.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.578 "serial_number": "00000000000000000000" 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_subsystem_add_host", 00:24:22.578 "params": { 00:24:22.578 "host": "nqn.2016-06.io.spdk:host1", 00:24:22.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.578 "psk": "key0" 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_subsystem_add_ns", 00:24:22.578 "params": { 00:24:22.578 "namespace": { 00:24:22.578 "bdev_name": "malloc0", 00:24:22.578 "nguid": "6F70BC650D4046288E0BF55F03368BB3", 00:24:22.578 "no_auto_visible": false, 00:24:22.578 "nsid": 1, 00:24:22.578 "uuid": "6f70bc65-0d40-4628-8e0b-f55f03368bb3" 00:24:22.578 }, 00:24:22.578 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:22.578 } 00:24:22.578 }, 00:24:22.578 { 00:24:22.578 "method": "nvmf_subsystem_add_listener", 00:24:22.578 "params": { 00:24:22.578 "listen_address": { 00:24:22.578 "adrfam": "IPv4", 00:24:22.578 "traddr": "10.0.0.2", 00:24:22.578 "trsvcid": "4420", 00:24:22.578 
"trtype": "TCP" 00:24:22.578 }, 00:24:22.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.578 "secure_channel": true 00:24:22.578 } 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 } 00:24:22.578 ] 00:24:22.578 }' 00:24:22.579 13:34:39 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:22.857 13:34:40 -- target/tls.sh@264 -- # bperfcfg='{ 00:24:22.857 "subsystems": [ 00:24:22.857 { 00:24:22.857 "subsystem": "keyring", 00:24:22.857 "config": [ 00:24:22.857 { 00:24:22.857 "method": "keyring_file_add_key", 00:24:22.857 "params": { 00:24:22.857 "name": "key0", 00:24:22.857 "path": "/tmp/tmp.F9heb1DsHD" 00:24:22.857 } 00:24:22.857 } 00:24:22.857 ] 00:24:22.857 }, 00:24:22.857 { 00:24:22.857 "subsystem": "iobuf", 00:24:22.857 "config": [ 00:24:22.857 { 00:24:22.857 "method": "iobuf_set_options", 00:24:22.857 "params": { 00:24:22.857 "large_bufsize": 135168, 00:24:22.857 "large_pool_count": 1024, 00:24:22.857 "small_bufsize": 8192, 00:24:22.857 "small_pool_count": 8192 00:24:22.857 } 00:24:22.857 } 00:24:22.857 ] 00:24:22.857 }, 00:24:22.857 { 00:24:22.857 "subsystem": "sock", 00:24:22.857 "config": [ 00:24:22.857 { 00:24:22.857 "method": "sock_impl_set_options", 00:24:22.857 "params": { 00:24:22.857 "enable_ktls": false, 00:24:22.857 "enable_placement_id": 0, 00:24:22.857 "enable_quickack": false, 00:24:22.857 "enable_recv_pipe": true, 00:24:22.857 "enable_zerocopy_send_client": false, 00:24:22.857 "enable_zerocopy_send_server": true, 00:24:22.857 "impl_name": "posix", 00:24:22.857 "recv_buf_size": 2097152, 00:24:22.857 "send_buf_size": 2097152, 00:24:22.857 "tls_version": 0, 00:24:22.857 "zerocopy_threshold": 0 00:24:22.857 } 00:24:22.857 }, 00:24:22.857 { 00:24:22.857 "method": "sock_impl_set_options", 00:24:22.857 "params": { 00:24:22.857 "enable_ktls": false, 00:24:22.857 "enable_placement_id": 0, 00:24:22.857 "enable_quickack": false, 00:24:22.857 "enable_recv_pipe": true, 00:24:22.857 "enable_zerocopy_send_client": false, 00:24:22.857 "enable_zerocopy_send_server": true, 00:24:22.857 "impl_name": "ssl", 00:24:22.857 "recv_buf_size": 4096, 00:24:22.857 "send_buf_size": 4096, 00:24:22.857 "tls_version": 0, 00:24:22.857 "zerocopy_threshold": 0 00:24:22.857 } 00:24:22.857 } 00:24:22.857 ] 00:24:22.857 }, 00:24:22.857 { 00:24:22.857 "subsystem": "vmd", 00:24:22.857 "config": [] 00:24:22.857 }, 00:24:22.857 { 00:24:22.857 "subsystem": "accel", 00:24:22.857 "config": [ 00:24:22.857 { 00:24:22.857 "method": "accel_set_options", 00:24:22.857 "params": { 00:24:22.857 "buf_count": 2048, 00:24:22.857 "large_cache_size": 16, 00:24:22.857 "sequence_count": 2048, 00:24:22.857 "small_cache_size": 128, 00:24:22.857 "task_count": 2048 00:24:22.858 } 00:24:22.858 } 00:24:22.858 ] 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "subsystem": "bdev", 00:24:22.858 "config": [ 00:24:22.858 { 00:24:22.858 "method": "bdev_set_options", 00:24:22.858 "params": { 00:24:22.858 "bdev_auto_examine": true, 00:24:22.858 "bdev_io_cache_size": 256, 00:24:22.858 "bdev_io_pool_size": 65535, 00:24:22.858 "iobuf_large_cache_size": 16, 00:24:22.858 "iobuf_small_cache_size": 128 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_raid_set_options", 00:24:22.858 "params": { 00:24:22.858 "process_window_size_kb": 1024 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_iscsi_set_options", 00:24:22.858 "params": { 00:24:22.858 "timeout_sec": 30 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": 
"bdev_nvme_set_options", 00:24:22.858 "params": { 00:24:22.858 "action_on_timeout": "none", 00:24:22.858 "allow_accel_sequence": false, 00:24:22.858 "arbitration_burst": 0, 00:24:22.858 "bdev_retry_count": 3, 00:24:22.858 "ctrlr_loss_timeout_sec": 0, 00:24:22.858 "delay_cmd_submit": true, 00:24:22.858 "dhchap_dhgroups": [ 00:24:22.858 "null", 00:24:22.858 "ffdhe2048", 00:24:22.858 "ffdhe3072", 00:24:22.858 "ffdhe4096", 00:24:22.858 "ffdhe6144", 00:24:22.858 "ffdhe8192" 00:24:22.858 ], 00:24:22.858 "dhchap_digests": [ 00:24:22.858 "sha256", 00:24:22.858 "sha384", 00:24:22.858 "sha512" 00:24:22.858 ], 00:24:22.858 "disable_auto_failback": false, 00:24:22.858 "fast_io_fail_timeout_sec": 0, 00:24:22.858 "generate_uuids": false, 00:24:22.858 "high_priority_weight": 0, 00:24:22.858 "io_path_stat": false, 00:24:22.858 "io_queue_requests": 512, 00:24:22.858 "keep_alive_timeout_ms": 10000, 00:24:22.858 "low_priority_weight": 0, 00:24:22.858 "medium_priority_weight": 0, 00:24:22.858 "nvme_adminq_poll_period_us": 10000, 00:24:22.858 "nvme_error_stat": false, 00:24:22.858 "nvme_ioq_poll_period_us": 0, 00:24:22.858 "rdma_cm_event_timeout_ms": 0, 00:24:22.858 "rdma_max_cq_size": 0, 00:24:22.858 "rdma_srq_size": 0, 00:24:22.858 "reconnect_delay_sec": 0, 00:24:22.858 "timeout_admin_us": 0, 00:24:22.858 "timeout_us": 0, 00:24:22.858 "transport_ack_timeout": 0, 00:24:22.858 "transport_retry_count": 4, 00:24:22.858 "transport_tos": 0 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_nvme_attach_controller", 00:24:22.858 "params": { 00:24:22.858 "adrfam": "IPv4", 00:24:22.858 "ctrlr_loss_timeout_sec": 0, 00:24:22.858 "ddgst": false, 00:24:22.858 "fast_io_fail_timeout_sec": 0, 00:24:22.858 "hdgst": false, 00:24:22.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:22.858 "name": "nvme0", 00:24:22.858 "prchk_guard": false, 00:24:22.858 "prchk_reftag": false, 00:24:22.858 "psk": "key0", 00:24:22.858 "reconnect_delay_sec": 0, 00:24:22.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.858 "traddr": "10.0.0.2", 00:24:22.858 "trsvcid": "4420", 00:24:22.858 "trtype": "TCP" 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_nvme_set_hotplug", 00:24:22.858 "params": { 00:24:22.858 "enable": false, 00:24:22.858 "period_us": 100000 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_enable_histogram", 00:24:22.858 "params": { 00:24:22.858 "enable": true, 00:24:22.858 "name": "nvme0n1" 00:24:22.858 } 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "method": "bdev_wait_for_examine" 00:24:22.858 } 00:24:22.858 ] 00:24:22.858 }, 00:24:22.858 { 00:24:22.858 "subsystem": "nbd", 00:24:22.858 "config": [] 00:24:22.858 } 00:24:22.858 ] 00:24:22.858 }' 00:24:22.858 13:34:40 -- target/tls.sh@266 -- # killprocess 78829 00:24:22.858 13:34:40 -- common/autotest_common.sh@936 -- # '[' -z 78829 ']' 00:24:22.858 13:34:40 -- common/autotest_common.sh@940 -- # kill -0 78829 00:24:22.858 13:34:40 -- common/autotest_common.sh@941 -- # uname 00:24:22.858 13:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.858 13:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78829 00:24:22.858 13:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:22.858 13:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:22.858 killing process with pid 78829 00:24:22.858 13:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78829' 00:24:22.858 13:34:40 -- common/autotest_common.sh@955 -- # 
kill 78829 00:24:22.858 Received shutdown signal, test time was about 1.000000 seconds 00:24:22.858 00:24:22.858 Latency(us) 00:24:22.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.858 =================================================================================================================== 00:24:22.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:22.858 13:34:40 -- common/autotest_common.sh@960 -- # wait 78829 00:24:23.118 13:34:40 -- target/tls.sh@267 -- # killprocess 78779 00:24:23.118 13:34:40 -- common/autotest_common.sh@936 -- # '[' -z 78779 ']' 00:24:23.118 13:34:40 -- common/autotest_common.sh@940 -- # kill -0 78779 00:24:23.118 13:34:40 -- common/autotest_common.sh@941 -- # uname 00:24:23.118 13:34:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.118 13:34:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78779 00:24:23.118 13:34:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:23.118 13:34:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:23.118 killing process with pid 78779 00:24:23.118 13:34:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78779' 00:24:23.118 13:34:40 -- common/autotest_common.sh@955 -- # kill 78779 00:24:23.118 13:34:40 -- common/autotest_common.sh@960 -- # wait 78779 00:24:23.377 13:34:40 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:24:23.377 13:34:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:23.377 13:34:40 -- target/tls.sh@269 -- # echo '{ 00:24:23.377 "subsystems": [ 00:24:23.377 { 00:24:23.377 "subsystem": "keyring", 00:24:23.377 "config": [ 00:24:23.377 { 00:24:23.377 "method": "keyring_file_add_key", 00:24:23.377 "params": { 00:24:23.377 "name": "key0", 00:24:23.377 "path": "/tmp/tmp.F9heb1DsHD" 00:24:23.377 } 00:24:23.377 } 00:24:23.377 ] 00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "subsystem": "iobuf", 00:24:23.377 "config": [ 00:24:23.377 { 00:24:23.377 "method": "iobuf_set_options", 00:24:23.377 "params": { 00:24:23.377 "large_bufsize": 135168, 00:24:23.377 "large_pool_count": 1024, 00:24:23.377 "small_bufsize": 8192, 00:24:23.377 "small_pool_count": 8192 00:24:23.377 } 00:24:23.377 } 00:24:23.377 ] 00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "subsystem": "sock", 00:24:23.377 "config": [ 00:24:23.377 { 00:24:23.377 "method": "sock_impl_set_options", 00:24:23.377 "params": { 00:24:23.377 "enable_ktls": false, 00:24:23.377 "enable_placement_id": 0, 00:24:23.377 "enable_quickack": false, 00:24:23.377 "enable_recv_pipe": true, 00:24:23.377 "enable_zerocopy_send_client": false, 00:24:23.377 "enable_zerocopy_send_server": true, 00:24:23.377 "impl_name": "posix", 00:24:23.377 "recv_buf_size": 2097152, 00:24:23.377 "send_buf_size": 2097152, 00:24:23.377 "tls_version": 0, 00:24:23.377 "zerocopy_threshold": 0 00:24:23.377 } 00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "method": "sock_impl_set_options", 00:24:23.377 "params": { 00:24:23.377 "enable_ktls": false, 00:24:23.377 "enable_placement_id": 0, 00:24:23.377 "enable_quickack": false, 00:24:23.377 "enable_recv_pipe": true, 00:24:23.377 "enable_zerocopy_send_client": false, 00:24:23.377 "enable_zerocopy_send_server": true, 00:24:23.377 "impl_name": "ssl", 00:24:23.377 "recv_buf_size": 4096, 00:24:23.377 "send_buf_size": 4096, 00:24:23.377 "tls_version": 0, 00:24:23.377 "zerocopy_threshold": 0 00:24:23.377 } 00:24:23.377 } 00:24:23.377 ] 00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "subsystem": "vmd", 00:24:23.377 "config": [] 
00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "subsystem": "accel", 00:24:23.377 "config": [ 00:24:23.377 { 00:24:23.377 "method": "accel_set_options", 00:24:23.377 "params": { 00:24:23.377 "buf_count": 2048, 00:24:23.377 "large_cache_size": 16, 00:24:23.377 "sequence_count": 2048, 00:24:23.377 "small_cache_size": 128, 00:24:23.377 "task_count": 2048 00:24:23.377 } 00:24:23.377 } 00:24:23.377 ] 00:24:23.377 }, 00:24:23.377 { 00:24:23.377 "subsystem": "bdev", 00:24:23.377 "config": [ 00:24:23.377 { 00:24:23.377 "method": "bdev_set_options", 00:24:23.377 "params": { 00:24:23.377 "bdev_auto_examine": true, 00:24:23.378 "bdev_io_cache_size": 256, 00:24:23.378 "bdev_io_pool_size": 65535, 00:24:23.378 "iobuf_large_cache_size": 16, 00:24:23.378 "iobuf_small_cache_size": 128 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_raid_set_options", 00:24:23.378 "params": { 00:24:23.378 "process_window_size_kb": 1024 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_iscsi_set_options", 00:24:23.378 "params": { 00:24:23.378 "timeout_sec": 30 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_nvme_set_options", 00:24:23.378 "params": { 00:24:23.378 "action_on_timeout": "none", 00:24:23.378 "allow_accel_sequence": false, 00:24:23.378 "arbitration_burst": 0, 00:24:23.378 "bdev_retry_count": 3, 00:24:23.378 "ctrlr_loss_timeout_sec": 0, 00:24:23.378 "delay_cmd_submit": true, 00:24:23.378 "dhchap_dhgroups": [ 00:24:23.378 "null", 00:24:23.378 "ffdhe2048", 00:24:23.378 "ffdhe3072", 00:24:23.378 "ffdhe4096", 00:24:23.378 "ffdhe6144", 00:24:23.378 "ffdhe8192" 00:24:23.378 ], 00:24:23.378 "dhchap_digests": [ 00:24:23.378 "sha256", 00:24:23.378 "sha384", 00:24:23.378 "sha512" 00:24:23.378 ], 00:24:23.378 "disable_auto_failback": false, 00:24:23.378 "fast_io_fail_timeout_sec": 0, 00:24:23.378 "generate_uuids": false, 00:24:23.378 "high_priority_weight": 0, 00:24:23.378 "io_path_stat": false, 00:24:23.378 "io_queue_requests": 0, 00:24:23.378 "keep_alive_timeout_ms": 10000, 00:24:23.378 "low_priority_weight": 0, 00:24:23.378 "medium_priority_weight": 0, 00:24:23.378 "nvme_adminq_poll_period_us": 10000, 00:24:23.378 "nvme_error_stat": false, 00:24:23.378 "nvme_ioq_poll_period_us": 0, 00:24:23.378 "rdma_cm_event_timeout_ms": 0, 00:24:23.378 "rdma_max_cq_size": 0, 00:24:23.378 "rdma_srq_size": 0, 00:24:23.378 "reconnect_delay_sec": 0, 00:24:23.378 "timeout_admin_us": 0, 00:24:23.378 "timeout_us": 0, 00:24:23.378 "transport_ack_timeout": 0, 00:24:23.378 "transport_retry_count": 4, 00:24:23.378 "transport_tos": 0 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_nvme_set_hotplug", 00:24:23.378 "params": { 00:24:23.378 "enable": false, 00:24:23.378 "period_us": 100000 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_malloc_create", 00:24:23.378 "params": { 00:24:23.378 "block_size": 4096, 00:24:23.378 "name": "malloc0", 00:24:23.378 "num_blocks": 8192, 00:24:23.378 "optimal_io_boundary": 0, 00:24:23.378 "physical_block_size": 4096, 00:24:23.378 "uuid": "6f70bc65-0d40-4628-8e0b-f55f03368bb3" 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "bdev_wait_for_examine" 00:24:23.378 } 00:24:23.378 ] 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "subsystem": "nbd", 00:24:23.378 "config": [] 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "subsystem": "scheduler", 00:24:23.378 "config": [ 00:24:23.378 { 00:24:23.378 "method": "framework_set_scheduler", 00:24:23.378 "params": { 00:24:23.378 "name": 
"static" 00:24:23.378 } 00:24:23.378 } 00:24:23.378 ] 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "subsystem": "nvmf", 00:24:23.378 "config": [ 00:24:23.378 { 00:24:23.378 "method": "nvmf_set_config", 00:24:23.378 "params": { 00:24:23.378 "admin_cmd_passthru": { 00:24:23.378 "identify_ctrlr": false 00:24:23.378 }, 00:24:23.378 "discovery_filter": "match_any" 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_set_max_subsystems", 00:24:23.378 "params": { 00:24:23.378 "max_subsystems": 1024 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_set_crdt", 00:24:23.378 "params": { 00:24:23.378 "crdt1": 0, 00:24:23.378 "crdt2": 0, 00:24:23.378 "crdt3": 0 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_create_transport", 00:24:23.378 "params": { 00:24:23.378 "abort_timeout_sec": 1, 00:24:23.378 "ack_timeout": 0, 00:24:23.378 "buf_cache_size": 4294967295, 00:24:23.378 "c2h_success": false, 00:24:23.378 "data_wr_pool_size": 0, 00:24:23.378 "dif_insert_or_strip": false, 00:24:23.378 "in_capsule_data_size": 4096, 00:24:23.378 "io_unit_size": 131072, 00:24:23.378 "max_aq_depth": 128, 00:24:23.378 "max_io_qpairs_per_ctrlr": 127, 00:24:23.378 "max_io_size": 131072, 00:24:23.378 "max_queue_depth": 128, 00:24:23.378 "num_shared_buffers": 511, 00:24:23.378 "sock_priority": 0, 00:24:23.378 "trtype": "TCP", 00:24:23.378 "zcopy": false 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_create_subsystem", 00:24:23.378 "params": { 00:24:23.378 "allow_any_host": false, 00:24:23.378 "ana_reporting": false, 00:24:23.378 "max_cntlid": 65519, 00:24:23.378 "max_namespaces": 32, 00:24:23.378 "min_cntlid": 1, 00:24:23.378 "model_number": "SPDK bdev Controller", 00:24:23.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.378 "serial_number": "00000000000000000000" 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_subsystem_add_host", 00:24:23.378 "params": { 00:24:23.378 "host": "nqn.2016-06.io.spdk:host1", 00:24:23.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.378 "psk": "key0" 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_subsystem_add_ns", 00:24:23.378 "params": { 00:24:23.378 "namespace": { 00:24:23.378 "bdev_name": "malloc0", 00:24:23.378 "nguid": "6F70BC650D4046288E0BF55F03368BB3", 00:24:23.378 "no_auto_visible": false, 00:24:23.378 "nsid": 1, 00:24:23.378 "uuid": "6f70bc65-0d40-4628-8e0b-f55f03368bb3" 00:24:23.378 }, 00:24:23.378 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:24:23.378 } 00:24:23.378 }, 00:24:23.378 { 00:24:23.378 "method": "nvmf_subsystem_add_listener", 00:24:23.378 "params": { 00:24:23.378 "listen_address": { 00:24:23.378 "adrfam": "IPv4", 00:24:23.378 "traddr": "10.0.0.2", 00:24:23.378 "trsvcid": "4420", 00:24:23.378 "trtype": "TCP" 00:24:23.378 }, 00:24:23.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.378 "secure_channel": true 00:24:23.378 } 00:24:23.378 } 00:24:23.378 ] 00:24:23.378 } 00:24:23.378 ] 00:24:23.378 }' 00:24:23.378 13:34:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:23.378 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.378 13:34:40 -- nvmf/common.sh@470 -- # nvmfpid=78925 00:24:23.378 13:34:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:23.378 13:34:40 -- nvmf/common.sh@471 -- # waitforlisten 78925 00:24:23.378 13:34:40 -- common/autotest_common.sh@817 -- # '[' -z 78925 ']' 00:24:23.378 13:34:40 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.378 13:34:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:23.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.378 13:34:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.378 13:34:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:23.378 13:34:40 -- common/autotest_common.sh@10 -- # set +x 00:24:23.378 [2024-04-26 13:34:40.768156] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:23.378 [2024-04-26 13:34:40.768261] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.637 [2024-04-26 13:34:40.903404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.637 [2024-04-26 13:34:41.017948] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.637 [2024-04-26 13:34:41.018012] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.637 [2024-04-26 13:34:41.018041] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:23.637 [2024-04-26 13:34:41.018050] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:23.637 [2024-04-26 13:34:41.018057] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:23.637 [2024-04-26 13:34:41.018182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.909 [2024-04-26 13:34:41.251637] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.909 [2024-04-26 13:34:41.283578] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.909 [2024-04-26 13:34:41.283818] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.478 13:34:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:24.478 13:34:41 -- common/autotest_common.sh@850 -- # return 0 00:24:24.478 13:34:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:24.478 13:34:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:24.478 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.479 13:34:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.479 13:34:41 -- target/tls.sh@272 -- # bdevperf_pid=78969 00:24:24.479 13:34:41 -- target/tls.sh@273 -- # waitforlisten 78969 /var/tmp/bdevperf.sock 00:24:24.479 13:34:41 -- common/autotest_common.sh@817 -- # '[' -z 78969 ']' 00:24:24.479 13:34:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.479 13:34:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:24.479 13:34:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
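The final stage, starting with target pid 78925 and bdevperf pid 78969 above, restarts both applications from configuration captured at runtime: save_config is issued against each RPC socket and the resulting JSON is fed straight back through -c /dev/fd/62 (target) and -c /dev/fd/63 (bdevperf). A file-based sketch of the same round trip; the two JSON file names are illustrative:

# Sketch: capture live configuration and restart both sides from it.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC save_config > /tmp/tgt_config.json                                    # target, default /var/tmp/spdk.sock
$RPC -s /var/tmp/bdevperf.sock save_config > /tmp/bperf_config.json        # initiator-side bdevperf
# ...stop both processes, then bring them back with the saved state:
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /tmp/tgt_config.json &
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c /tmp/bperf_config.json &
# target/tls.sh then waits on each RPC socket (waitforlisten) before issuing further RPCs.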
00:24:24.479 13:34:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:24.479 13:34:41 -- common/autotest_common.sh@10 -- # set +x 00:24:24.479 13:34:41 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:24.479 13:34:41 -- target/tls.sh@270 -- # echo '{ 00:24:24.479 "subsystems": [ 00:24:24.479 { 00:24:24.479 "subsystem": "keyring", 00:24:24.479 "config": [ 00:24:24.479 { 00:24:24.479 "method": "keyring_file_add_key", 00:24:24.479 "params": { 00:24:24.479 "name": "key0", 00:24:24.479 "path": "/tmp/tmp.F9heb1DsHD" 00:24:24.479 } 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "iobuf", 00:24:24.479 "config": [ 00:24:24.479 { 00:24:24.479 "method": "iobuf_set_options", 00:24:24.479 "params": { 00:24:24.479 "large_bufsize": 135168, 00:24:24.479 "large_pool_count": 1024, 00:24:24.479 "small_bufsize": 8192, 00:24:24.479 "small_pool_count": 8192 00:24:24.479 } 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "sock", 00:24:24.479 "config": [ 00:24:24.479 { 00:24:24.479 "method": "sock_impl_set_options", 00:24:24.479 "params": { 00:24:24.479 "enable_ktls": false, 00:24:24.479 "enable_placement_id": 0, 00:24:24.479 "enable_quickack": false, 00:24:24.479 "enable_recv_pipe": true, 00:24:24.479 "enable_zerocopy_send_client": false, 00:24:24.479 "enable_zerocopy_send_server": true, 00:24:24.479 "impl_name": "posix", 00:24:24.479 "recv_buf_size": 2097152, 00:24:24.479 "send_buf_size": 2097152, 00:24:24.479 "tls_version": 0, 00:24:24.479 "zerocopy_threshold": 0 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "sock_impl_set_options", 00:24:24.479 "params": { 00:24:24.479 "enable_ktls": false, 00:24:24.479 "enable_placement_id": 0, 00:24:24.479 "enable_quickack": false, 00:24:24.479 "enable_recv_pipe": true, 00:24:24.479 "enable_zerocopy_send_client": false, 00:24:24.479 "enable_zerocopy_send_server": true, 00:24:24.479 "impl_name": "ssl", 00:24:24.479 "recv_buf_size": 4096, 00:24:24.479 "send_buf_size": 4096, 00:24:24.479 "tls_version": 0, 00:24:24.479 "zerocopy_threshold": 0 00:24:24.479 } 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "vmd", 00:24:24.479 "config": [] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "accel", 00:24:24.479 "config": [ 00:24:24.479 { 00:24:24.479 "method": "accel_set_options", 00:24:24.479 "params": { 00:24:24.479 "buf_count": 2048, 00:24:24.479 "large_cache_size": 16, 00:24:24.479 "sequence_count": 2048, 00:24:24.479 "small_cache_size": 128, 00:24:24.479 "task_count": 2048 00:24:24.479 } 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "bdev", 00:24:24.479 "config": [ 00:24:24.479 { 00:24:24.479 "method": "bdev_set_options", 00:24:24.479 "params": { 00:24:24.479 "bdev_auto_examine": true, 00:24:24.479 "bdev_io_cache_size": 256, 00:24:24.479 "bdev_io_pool_size": 65535, 00:24:24.479 "iobuf_large_cache_size": 16, 00:24:24.479 "iobuf_small_cache_size": 128 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_raid_set_options", 00:24:24.479 "params": { 00:24:24.479 "process_window_size_kb": 1024 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_iscsi_set_options", 00:24:24.479 "params": { 00:24:24.479 "timeout_sec": 30 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_nvme_set_options", 00:24:24.479 "params": 
{ 00:24:24.479 "action_on_timeout": "none", 00:24:24.479 "allow_accel_sequence": false, 00:24:24.479 "arbitration_burst": 0, 00:24:24.479 "bdev_retry_count": 3, 00:24:24.479 "ctrlr_loss_timeout_sec": 0, 00:24:24.479 "delay_cmd_submit": true, 00:24:24.479 "dhchap_dhgroups": [ 00:24:24.479 "null", 00:24:24.479 "ffdhe2048", 00:24:24.479 "ffdhe3072", 00:24:24.479 "ffdhe4096", 00:24:24.479 "ffdhe6144", 00:24:24.479 "ffdhe8192" 00:24:24.479 ], 00:24:24.479 "dhchap_digests": [ 00:24:24.479 "sha256", 00:24:24.479 "sha384", 00:24:24.479 "sha512" 00:24:24.479 ], 00:24:24.479 "disable_auto_failback": false, 00:24:24.479 "fast_io_fail_timeout_sec": 0, 00:24:24.479 "generate_uuids": false, 00:24:24.479 "high_priority_weight": 0, 00:24:24.479 "io_path_stat": false, 00:24:24.479 "io_queue_requests": 512, 00:24:24.479 "keep_alive_timeout_ms": 10000, 00:24:24.479 "low_priority_weight": 0, 00:24:24.479 "medium_priority_weight": 0, 00:24:24.479 "nvme_adminq_poll_period_us": 10000, 00:24:24.479 "nvme_error_stat": false, 00:24:24.479 "nvme_ioq_poll_period_us": 0, 00:24:24.479 "rdma_cm_event_timeout_ms": 0, 00:24:24.479 "rdma_max_cq_size": 0, 00:24:24.479 "rdma_srq_size": 0, 00:24:24.479 "reconnect_delay_sec": 0, 00:24:24.479 "timeout_admin_us": 0, 00:24:24.479 "timeout_us": 0, 00:24:24.479 "transport_ack_timeout": 0, 00:24:24.479 "transport_retry_count": 4, 00:24:24.479 "transport_tos": 0 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_nvme_attach_controller", 00:24:24.479 "params": { 00:24:24.479 "adrfam": "IPv4", 00:24:24.479 "ctrlr_loss_timeout_sec": 0, 00:24:24.479 "ddgst": false, 00:24:24.479 "fast_io_fail_timeout_sec": 0, 00:24:24.479 "hdgst": false, 00:24:24.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:24.479 "name": "nvme0", 00:24:24.479 "prchk_guard": false, 00:24:24.479 "prchk_reftag": false, 00:24:24.479 "psk": "key0", 00:24:24.479 "reconnect_delay_sec": 0, 00:24:24.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.479 "traddr": "10.0.0.2", 00:24:24.479 "trsvcid": "4420", 00:24:24.479 "trtype": "TCP" 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_nvme_set_hotplug", 00:24:24.479 "params": { 00:24:24.479 "enable": false, 00:24:24.479 "period_us": 100000 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_enable_histogram", 00:24:24.479 "params": { 00:24:24.479 "enable": true, 00:24:24.479 "name": "nvme0n1" 00:24:24.479 } 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "method": "bdev_wait_for_examine" 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }, 00:24:24.479 { 00:24:24.479 "subsystem": "nbd", 00:24:24.479 "config": [] 00:24:24.479 } 00:24:24.479 ] 00:24:24.479 }' 00:24:24.479 [2024-04-26 13:34:41.894673] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
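The JSON just echoed is the initiator-side counterpart handed to bdevperf on -c /dev/fd/63: it registers /tmp/tmp.F9heb1DsHD as keyring entry "key0" and pre-creates controller "nvme0" against nqn.2016-06.io.spdk:cnode1 over TLS. The same state could be built against a bdevperf started with -z but without -c by driving its RPC socket directly; a sketch follows, with flag spellings taken from this tree (v24.05-pre) and therefore subject to change in other releases:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Register the PSK file under the name the attach call refers to.
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.F9heb1DsHD

# Attach a TLS-protected NVMe/TCP controller to the subsystem exported by the target.
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Start the workload, exactly as the harness does with bdevperf.py further down.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests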
00:24:24.479 [2024-04-26 13:34:41.894835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78969 ] 00:24:24.738 [2024-04-26 13:34:42.038450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.738 [2024-04-26 13:34:42.163754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.998 [2024-04-26 13:34:42.335375] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:25.564 13:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:25.564 13:34:42 -- common/autotest_common.sh@850 -- # return 0 00:24:25.564 13:34:42 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.564 13:34:42 -- target/tls.sh@275 -- # jq -r '.[].name' 00:24:25.821 13:34:43 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.821 13:34:43 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.079 Running I/O for 1 seconds... 00:24:27.014 00:24:27.014 Latency(us) 00:24:27.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.014 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:27.014 Verification LBA range: start 0x0 length 0x2000 00:24:27.014 nvme0n1 : 1.03 3713.98 14.51 0.00 0.00 34059.58 7536.64 20614.05 00:24:27.015 =================================================================================================================== 00:24:27.015 Total : 3713.98 14.51 0.00 0.00 34059.58 7536.64 20614.05 00:24:27.015 0 00:24:27.015 13:34:44 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:24:27.015 13:34:44 -- target/tls.sh@279 -- # cleanup 00:24:27.015 13:34:44 -- target/tls.sh@15 -- # process_shm --id 0 00:24:27.015 13:34:44 -- common/autotest_common.sh@794 -- # type=--id 00:24:27.015 13:34:44 -- common/autotest_common.sh@795 -- # id=0 00:24:27.015 13:34:44 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:24:27.015 13:34:44 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:27.015 13:34:44 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:24:27.015 13:34:44 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:24:27.015 13:34:44 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:24:27.015 13:34:44 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:27.015 nvmf_trace.0 00:24:27.273 13:34:44 -- common/autotest_common.sh@809 -- # return 0 00:24:27.273 13:34:44 -- target/tls.sh@16 -- # killprocess 78969 00:24:27.273 13:34:44 -- common/autotest_common.sh@936 -- # '[' -z 78969 ']' 00:24:27.273 13:34:44 -- common/autotest_common.sh@940 -- # kill -0 78969 00:24:27.273 13:34:44 -- common/autotest_common.sh@941 -- # uname 00:24:27.273 13:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.273 13:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78969 00:24:27.273 13:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:27.273 13:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:27.273 killing process with pid 78969 00:24:27.273 13:34:44 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 78969' 00:24:27.273 13:34:44 -- common/autotest_common.sh@955 -- # kill 78969 00:24:27.273 Received shutdown signal, test time was about 1.000000 seconds 00:24:27.273 00:24:27.273 Latency(us) 00:24:27.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.273 =================================================================================================================== 00:24:27.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.273 13:34:44 -- common/autotest_common.sh@960 -- # wait 78969 00:24:27.531 13:34:44 -- target/tls.sh@17 -- # nvmftestfini 00:24:27.531 13:34:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:27.531 13:34:44 -- nvmf/common.sh@117 -- # sync 00:24:27.531 13:34:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:27.531 13:34:44 -- nvmf/common.sh@120 -- # set +e 00:24:27.531 13:34:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:27.531 13:34:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:27.531 rmmod nvme_tcp 00:24:27.531 rmmod nvme_fabrics 00:24:27.531 rmmod nvme_keyring 00:24:27.531 13:34:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:27.531 13:34:44 -- nvmf/common.sh@124 -- # set -e 00:24:27.531 13:34:44 -- nvmf/common.sh@125 -- # return 0 00:24:27.531 13:34:44 -- nvmf/common.sh@478 -- # '[' -n 78925 ']' 00:24:27.531 13:34:44 -- nvmf/common.sh@479 -- # killprocess 78925 00:24:27.531 13:34:44 -- common/autotest_common.sh@936 -- # '[' -z 78925 ']' 00:24:27.531 13:34:44 -- common/autotest_common.sh@940 -- # kill -0 78925 00:24:27.531 13:34:44 -- common/autotest_common.sh@941 -- # uname 00:24:27.531 13:34:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:27.531 13:34:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78925 00:24:27.531 13:34:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:27.531 13:34:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:27.532 13:34:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78925' 00:24:27.532 killing process with pid 78925 00:24:27.532 13:34:44 -- common/autotest_common.sh@955 -- # kill 78925 00:24:27.532 13:34:44 -- common/autotest_common.sh@960 -- # wait 78925 00:24:27.790 13:34:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:27.790 13:34:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:27.790 13:34:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:27.790 13:34:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.790 13:34:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.790 13:34:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.790 13:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.790 13:34:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.790 13:34:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:27.790 13:34:45 -- target/tls.sh@18 -- # rm -f /tmp/tmp.GIq8NQC27v /tmp/tmp.QgdfYHMiHH /tmp/tmp.F9heb1DsHD 00:24:27.790 00:24:27.790 real 1m29.361s 00:24:27.790 user 2m22.432s 00:24:27.790 sys 0m28.708s 00:24:27.790 13:34:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:27.790 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:27.790 ************************************ 00:24:27.790 END TEST nvmf_tls 00:24:27.790 ************************************ 00:24:28.064 13:34:45 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:28.064 13:34:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:28.064 13:34:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:28.064 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.064 ************************************ 00:24:28.064 START TEST nvmf_fips 00:24:28.064 ************************************ 00:24:28.064 13:34:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:28.064 * Looking for test storage... 00:24:28.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:24:28.064 13:34:45 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.064 13:34:45 -- nvmf/common.sh@7 -- # uname -s 00:24:28.064 13:34:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.064 13:34:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.064 13:34:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.064 13:34:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.064 13:34:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.064 13:34:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.064 13:34:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.064 13:34:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.064 13:34:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.064 13:34:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.064 13:34:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:28.064 13:34:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:28.064 13:34:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.064 13:34:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.064 13:34:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.064 13:34:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.064 13:34:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.064 13:34:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.064 13:34:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.064 13:34:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.064 13:34:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.064 13:34:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.064 13:34:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.064 13:34:45 -- paths/export.sh@5 -- # export PATH 00:24:28.064 13:34:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.064 13:34:45 -- nvmf/common.sh@47 -- # : 0 00:24:28.064 13:34:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.064 13:34:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.064 13:34:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.064 13:34:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.064 13:34:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.064 13:34:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.064 13:34:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.064 13:34:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.064 13:34:45 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:28.064 13:34:45 -- fips/fips.sh@89 -- # check_openssl_version 00:24:28.064 13:34:45 -- fips/fips.sh@83 -- # local target=3.0.0 00:24:28.064 13:34:45 -- fips/fips.sh@85 -- # awk '{print $2}' 00:24:28.064 13:34:45 -- fips/fips.sh@85 -- # openssl version 00:24:28.064 13:34:45 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:24:28.064 13:34:45 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:24:28.064 13:34:45 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:24:28.064 13:34:45 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:24:28.064 13:34:45 -- scripts/common.sh@333 -- # IFS=.-: 00:24:28.064 13:34:45 -- scripts/common.sh@333 -- # read -ra ver1 00:24:28.064 13:34:45 -- scripts/common.sh@334 -- # IFS=.-: 00:24:28.064 13:34:45 -- scripts/common.sh@334 -- # read -ra ver2 00:24:28.064 13:34:45 -- scripts/common.sh@335 -- # local 'op=>=' 00:24:28.064 13:34:45 -- scripts/common.sh@337 -- # ver1_l=3 00:24:28.064 13:34:45 -- scripts/common.sh@338 -- # ver2_l=3 00:24:28.064 13:34:45 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:24:28.064 13:34:45 -- 
scripts/common.sh@341 -- # case "$op" in 00:24:28.064 13:34:45 -- scripts/common.sh@345 -- # : 1 00:24:28.064 13:34:45 -- scripts/common.sh@361 -- # (( v = 0 )) 00:24:28.064 13:34:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.064 13:34:45 -- scripts/common.sh@362 -- # decimal 3 00:24:28.064 13:34:45 -- scripts/common.sh@350 -- # local d=3 00:24:28.064 13:34:45 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:28.064 13:34:45 -- scripts/common.sh@352 -- # echo 3 00:24:28.064 13:34:45 -- scripts/common.sh@362 -- # ver1[v]=3 00:24:28.064 13:34:45 -- scripts/common.sh@363 -- # decimal 3 00:24:28.064 13:34:45 -- scripts/common.sh@350 -- # local d=3 00:24:28.064 13:34:45 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:28.064 13:34:45 -- scripts/common.sh@352 -- # echo 3 00:24:28.064 13:34:45 -- scripts/common.sh@363 -- # ver2[v]=3 00:24:28.064 13:34:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:28.064 13:34:45 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:28.064 13:34:45 -- scripts/common.sh@361 -- # (( v++ )) 00:24:28.064 13:34:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.064 13:34:45 -- scripts/common.sh@362 -- # decimal 0 00:24:28.064 13:34:45 -- scripts/common.sh@350 -- # local d=0 00:24:28.064 13:34:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:28.064 13:34:45 -- scripts/common.sh@352 -- # echo 0 00:24:28.064 13:34:45 -- scripts/common.sh@362 -- # ver1[v]=0 00:24:28.064 13:34:45 -- scripts/common.sh@363 -- # decimal 0 00:24:28.064 13:34:45 -- scripts/common.sh@350 -- # local d=0 00:24:28.064 13:34:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:28.064 13:34:45 -- scripts/common.sh@352 -- # echo 0 00:24:28.065 13:34:45 -- scripts/common.sh@363 -- # ver2[v]=0 00:24:28.065 13:34:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:28.065 13:34:45 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:24:28.065 13:34:45 -- scripts/common.sh@361 -- # (( v++ )) 00:24:28.065 13:34:45 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:28.065 13:34:45 -- scripts/common.sh@362 -- # decimal 9 00:24:28.065 13:34:45 -- scripts/common.sh@350 -- # local d=9 00:24:28.065 13:34:45 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:24:28.065 13:34:45 -- scripts/common.sh@352 -- # echo 9 00:24:28.065 13:34:45 -- scripts/common.sh@362 -- # ver1[v]=9 00:24:28.065 13:34:45 -- scripts/common.sh@363 -- # decimal 0 00:24:28.065 13:34:45 -- scripts/common.sh@350 -- # local d=0 00:24:28.065 13:34:45 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:28.065 13:34:45 -- scripts/common.sh@352 -- # echo 0 00:24:28.065 13:34:45 -- scripts/common.sh@363 -- # ver2[v]=0 00:24:28.065 13:34:45 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:24:28.065 13:34:45 -- scripts/common.sh@364 -- # return 0 00:24:28.065 13:34:45 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:24:28.065 13:34:45 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:28.065 13:34:45 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:24:28.065 13:34:45 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:28.065 13:34:45 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:28.065 13:34:45 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:24:28.065 13:34:45 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:24:28.065 13:34:45 -- fips/fips.sh@113 -- # build_openssl_config 00:24:28.065 13:34:45 -- fips/fips.sh@37 -- # cat 00:24:28.323 13:34:45 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:24:28.323 13:34:45 -- fips/fips.sh@58 -- # cat - 00:24:28.323 13:34:45 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:28.323 13:34:45 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:24:28.323 13:34:45 -- fips/fips.sh@116 -- # mapfile -t providers 00:24:28.323 13:34:45 -- fips/fips.sh@116 -- # grep name 00:24:28.323 13:34:45 -- fips/fips.sh@116 -- # openssl list -providers 00:24:28.323 13:34:45 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:24:28.323 13:34:45 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:24:28.323 13:34:45 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:28.323 13:34:45 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:24:28.323 13:34:45 -- common/autotest_common.sh@638 -- # local es=0 00:24:28.323 13:34:45 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:28.323 13:34:45 -- fips/fips.sh@127 -- # : 00:24:28.323 13:34:45 -- common/autotest_common.sh@626 -- # local arg=openssl 00:24:28.323 13:34:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:28.323 13:34:45 -- common/autotest_common.sh@630 -- # type -t openssl 00:24:28.323 13:34:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:28.323 13:34:45 -- common/autotest_common.sh@632 -- # type -P openssl 00:24:28.323 13:34:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:28.323 13:34:45 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:24:28.323 13:34:45 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:24:28.323 13:34:45 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:24:28.323 Error setting digest 00:24:28.323 0072CE23307F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:24:28.323 0072CE23307F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:24:28.323 13:34:45 -- common/autotest_common.sh@641 -- # es=1 00:24:28.323 13:34:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:28.323 13:34:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:28.323 13:34:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:28.323 13:34:45 -- fips/fips.sh@130 -- # nvmftestinit 00:24:28.323 13:34:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:28.323 13:34:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.323 13:34:45 -- nvmf/common.sh@437 -- # prepare_net_devs 
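Everything from fips.sh@83 to this point is a pre-flight probe rather than the test itself: the script requires OpenSSL 3.0.0 or newer, checks that fips.so exists under the reported modules directory, builds a temporary spdk_fips.conf that activates the base and FIPS providers, and finally proves that a non-approved digest is rejected (the "Error setting digest" output above is the expected result; the NOT wrapper turns that failure into a pass). A condensed, hand-runnable version of the same probe, assuming a Red Hat style OpenSSL 3 build like the one in this log:

openssl version                        # must be 3.x for the provider model
openssl info -modulesdir               # fips.so has to live under this directory
openssl list -providers | grep -i -e base -e fips

# With the FIPS provider active, MD5 must fail; invert the exit status so that
# the rejection counts as success, mirroring what the NOT helper does.
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "MD5 unexpectedly allowed: FIPS provider not active"
else
    echo "MD5 rejected as expected"
fi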
00:24:28.323 13:34:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:28.323 13:34:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:28.323 13:34:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.323 13:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.323 13:34:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.323 13:34:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:28.323 13:34:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:28.323 13:34:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:28.323 13:34:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:28.323 13:34:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:28.323 13:34:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:28.323 13:34:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.323 13:34:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.323 13:34:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:28.323 13:34:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:28.324 13:34:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.324 13:34:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.324 13:34:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.324 13:34:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.324 13:34:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.324 13:34:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.324 13:34:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.324 13:34:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.324 13:34:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:28.324 13:34:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:28.324 Cannot find device "nvmf_tgt_br" 00:24:28.324 13:34:45 -- nvmf/common.sh@155 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.324 Cannot find device "nvmf_tgt_br2" 00:24:28.324 13:34:45 -- nvmf/common.sh@156 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:28.324 13:34:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:28.324 Cannot find device "nvmf_tgt_br" 00:24:28.324 13:34:45 -- nvmf/common.sh@158 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:28.324 Cannot find device "nvmf_tgt_br2" 00:24:28.324 13:34:45 -- nvmf/common.sh@159 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:28.324 13:34:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:28.324 13:34:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.324 13:34:45 -- nvmf/common.sh@162 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.324 13:34:45 -- nvmf/common.sh@163 -- # true 00:24:28.324 13:34:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.324 13:34:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.324 13:34:45 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.324 13:34:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.324 13:34:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.582 13:34:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.582 13:34:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.582 13:34:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:28.582 13:34:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:28.582 13:34:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:28.582 13:34:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:28.582 13:34:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:28.582 13:34:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:28.582 13:34:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.582 13:34:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.582 13:34:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.582 13:34:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:28.582 13:34:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:28.582 13:34:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.582 13:34:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.582 13:34:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.582 13:34:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.582 13:34:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.582 13:34:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:28.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:24:28.582 00:24:28.582 --- 10.0.0.2 ping statistics --- 00:24:28.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.582 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:24:28.582 13:34:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:28.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:28.582 00:24:28.582 --- 10.0.0.3 ping statistics --- 00:24:28.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.582 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:28.582 13:34:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:28.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:24:28.582 00:24:28.582 --- 10.0.0.1 ping statistics --- 00:24:28.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.582 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:24:28.582 13:34:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.582 13:34:45 -- nvmf/common.sh@422 -- # return 0 00:24:28.582 13:34:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:28.582 13:34:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.582 13:34:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:28.582 13:34:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:28.582 13:34:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.582 13:34:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:28.582 13:34:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:28.582 13:34:45 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:24:28.582 13:34:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:28.582 13:34:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:28.582 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 13:34:45 -- nvmf/common.sh@470 -- # nvmfpid=79258 00:24:28.582 13:34:45 -- nvmf/common.sh@471 -- # waitforlisten 79258 00:24:28.582 13:34:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:28.582 13:34:45 -- common/autotest_common.sh@817 -- # '[' -z 79258 ']' 00:24:28.582 13:34:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.582 13:34:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:28.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.582 13:34:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.582 13:34:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:28.582 13:34:45 -- common/autotest_common.sh@10 -- # set +x 00:24:28.841 [2024-04-26 13:34:46.068187] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:28.841 [2024-04-26 13:34:46.068303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.841 [2024-04-26 13:34:46.212118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.100 [2024-04-26 13:34:46.343178] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.100 [2024-04-26 13:34:46.343246] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.100 [2024-04-26 13:34:46.343282] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.100 [2024-04-26 13:34:46.343299] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.100 [2024-04-26 13:34:46.343312] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
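The nvmf_veth_init sequence above is what NET_TYPE=virt means in practice: two veth pairs for the target (nvmf_tgt_if/nvmf_tgt_if2, addressed 10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the initiator end stays in the root namespace as nvmf_init_if with 10.0.0.1, the bridge halves are enslaved to nvmf_br, port 4420 is opened in iptables, and the three pings confirm reachability before the target is started. Reduced to its essentials (second target interface omitted for brevity), the topology is roughly:

ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator to target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target back to initiator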
00:24:29.100 [2024-04-26 13:34:46.343378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.665 13:34:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:29.665 13:34:47 -- common/autotest_common.sh@850 -- # return 0 00:24:29.665 13:34:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:29.665 13:34:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:29.665 13:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:29.665 13:34:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.665 13:34:47 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:24:29.665 13:34:47 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:29.665 13:34:47 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:29.665 13:34:47 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:29.665 13:34:47 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:29.665 13:34:47 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:29.665 13:34:47 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:29.665 13:34:47 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:29.923 [2024-04-26 13:34:47.366127] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.181 [2024-04-26 13:34:47.382048] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:30.181 [2024-04-26 13:34:47.382276] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.181 [2024-04-26 13:34:47.413510] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:30.181 malloc0 00:24:30.181 13:34:47 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:30.181 13:34:47 -- fips/fips.sh@147 -- # bdevperf_pid=79321 00:24:30.181 13:34:47 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:30.181 13:34:47 -- fips/fips.sh@148 -- # waitforlisten 79321 /var/tmp/bdevperf.sock 00:24:30.181 13:34:47 -- common/autotest_common.sh@817 -- # '[' -z 79321 ']' 00:24:30.181 13:34:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.181 13:34:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.181 13:34:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.181 13:34:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.181 13:34:47 -- common/autotest_common.sh@10 -- # set +x 00:24:30.181 [2024-04-26 13:34:47.526877] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
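Unlike the TLS suite, the FIPS test does not ship a JSON config for the target side of the key: fips.sh@136-141 writes the interchange-format PSK (the NVMeTLSkey-1:01:... string) to test/nvmf/fips/key.txt with mode 0600 and registers it for host1 on cnode1 through rpc.py, which is why the target logs the nvmf_tcp_psk_path deprecation warning; the path-based form is slated for removal in v24.09 in favour of keyring entries. A minimal sketch of that key handling, using the NQNs this test uses and the --psk spelling from this tree:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt

# The PSK is a secret: write it without a trailing newline and lock the mode down.
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# Target side, path-based PSK (deprecated, hence the warning in this log); newer
# trees register the file with keyring_file_add_key and pass the key name instead.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator side: bdevperf hands the same file to bdev_nvme_attach_controller via
# --psk, exactly as the attach for TLSTESTn1 below shows.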
00:24:30.181 [2024-04-26 13:34:47.527020] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79321 ] 00:24:30.439 [2024-04-26 13:34:47.666944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.439 [2024-04-26 13:34:47.788072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.414 13:34:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.414 13:34:48 -- common/autotest_common.sh@850 -- # return 0 00:24:31.414 13:34:48 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:31.414 [2024-04-26 13:34:48.844028] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:31.414 [2024-04-26 13:34:48.844165] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:31.671 TLSTESTn1 00:24:31.671 13:34:48 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:31.671 Running I/O for 10 seconds... 00:24:43.862 00:24:43.862 Latency(us) 00:24:43.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.862 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:43.862 Verification LBA range: start 0x0 length 0x2000 00:24:43.862 TLSTESTn1 : 10.02 3445.50 13.46 0.00 0.00 37076.45 8043.05 33602.09 00:24:43.862 =================================================================================================================== 00:24:43.862 Total : 3445.50 13.46 0.00 0.00 37076.45 8043.05 33602.09 00:24:43.862 0 00:24:43.862 13:34:59 -- fips/fips.sh@1 -- # cleanup 00:24:43.862 13:34:59 -- fips/fips.sh@15 -- # process_shm --id 0 00:24:43.862 13:34:59 -- common/autotest_common.sh@794 -- # type=--id 00:24:43.862 13:34:59 -- common/autotest_common.sh@795 -- # id=0 00:24:43.862 13:34:59 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:24:43.862 13:34:59 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:43.862 13:34:59 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:24:43.862 13:34:59 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:24:43.862 13:34:59 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:24:43.862 13:34:59 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:43.862 nvmf_trace.0 00:24:43.862 13:34:59 -- common/autotest_common.sh@809 -- # return 0 00:24:43.862 13:34:59 -- fips/fips.sh@16 -- # killprocess 79321 00:24:43.862 13:34:59 -- common/autotest_common.sh@936 -- # '[' -z 79321 ']' 00:24:43.862 13:34:59 -- common/autotest_common.sh@940 -- # kill -0 79321 00:24:43.862 13:34:59 -- common/autotest_common.sh@941 -- # uname 00:24:43.862 13:34:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.862 13:34:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79321 00:24:43.862 13:34:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:24:43.862 
killing process with pid 79321 00:24:43.862 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.863 00:24:43.863 Latency(us) 00:24:43.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.863 =================================================================================================================== 00:24:43.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:43.863 13:34:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:24:43.863 13:34:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79321' 00:24:43.863 13:34:59 -- common/autotest_common.sh@955 -- # kill 79321 00:24:43.863 [2024-04-26 13:34:59.216268] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:43.863 13:34:59 -- common/autotest_common.sh@960 -- # wait 79321 00:24:43.863 13:34:59 -- fips/fips.sh@17 -- # nvmftestfini 00:24:43.863 13:34:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:43.863 13:34:59 -- nvmf/common.sh@117 -- # sync 00:24:43.863 13:35:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:43.863 13:35:00 -- nvmf/common.sh@120 -- # set +e 00:24:43.863 13:35:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:43.863 13:35:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:43.863 rmmod nvme_tcp 00:24:43.863 rmmod nvme_fabrics 00:24:43.863 rmmod nvme_keyring 00:24:43.863 13:35:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:43.863 13:35:00 -- nvmf/common.sh@124 -- # set -e 00:24:43.863 13:35:00 -- nvmf/common.sh@125 -- # return 0 00:24:43.863 13:35:00 -- nvmf/common.sh@478 -- # '[' -n 79258 ']' 00:24:43.863 13:35:00 -- nvmf/common.sh@479 -- # killprocess 79258 00:24:43.863 13:35:00 -- common/autotest_common.sh@936 -- # '[' -z 79258 ']' 00:24:43.863 13:35:00 -- common/autotest_common.sh@940 -- # kill -0 79258 00:24:43.863 13:35:00 -- common/autotest_common.sh@941 -- # uname 00:24:43.863 13:35:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:43.863 13:35:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79258 00:24:43.863 13:35:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:43.863 killing process with pid 79258 00:24:43.863 13:35:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:43.863 13:35:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79258' 00:24:43.863 13:35:00 -- common/autotest_common.sh@955 -- # kill 79258 00:24:43.863 [2024-04-26 13:35:00.615714] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:43.863 13:35:00 -- common/autotest_common.sh@960 -- # wait 79258 00:24:43.863 13:35:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:43.863 13:35:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:43.863 13:35:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:43.863 13:35:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.863 13:35:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.863 13:35:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.863 13:35:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.863 13:35:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.863 13:35:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:43.863 13:35:00 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:24:43.863 00:24:43.863 real 0m15.637s 00:24:43.863 user 0m21.127s 00:24:43.863 sys 0m5.765s 00:24:43.863 13:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:43.863 13:35:00 -- common/autotest_common.sh@10 -- # set +x 00:24:43.863 ************************************ 00:24:43.863 END TEST nvmf_fips 00:24:43.863 ************************************ 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:24:43.863 13:35:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:43.863 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:24:43.863 13:35:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:43.863 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:24:43.863 13:35:01 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:43.863 13:35:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:43.863 13:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:43.863 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:43.863 ************************************ 00:24:43.863 START TEST nvmf_multicontroller 00:24:43.863 ************************************ 00:24:43.863 13:35:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:43.863 * Looking for test storage... 00:24:43.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:43.863 13:35:01 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:43.863 13:35:01 -- nvmf/common.sh@7 -- # uname -s 00:24:43.863 13:35:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.863 13:35:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.863 13:35:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.863 13:35:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.863 13:35:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.863 13:35:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.863 13:35:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.863 13:35:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.863 13:35:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.863 13:35:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.863 13:35:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:43.863 13:35:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:43.863 13:35:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.863 13:35:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.863 13:35:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:43.863 13:35:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.863 13:35:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:43.863 13:35:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.863 13:35:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.863 13:35:01 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.863 13:35:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.863 13:35:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.863 13:35:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.863 13:35:01 -- paths/export.sh@5 -- # export PATH 00:24:43.863 13:35:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.863 13:35:01 -- nvmf/common.sh@47 -- # : 0 00:24:43.863 13:35:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.863 13:35:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.863 13:35:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.863 13:35:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.863 13:35:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.863 13:35:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:43.863 13:35:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.863 13:35:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.863 13:35:01 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:43.863 13:35:01 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:43.863 13:35:01 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:43.863 13:35:01 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:43.863 13:35:01 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.863 13:35:01 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
00:24:43.863 13:35:01 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:43.863 13:35:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:43.863 13:35:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.863 13:35:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:43.863 13:35:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:43.863 13:35:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:43.863 13:35:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.863 13:35:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.863 13:35:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.863 13:35:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:43.863 13:35:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:43.863 13:35:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:43.864 13:35:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:43.864 13:35:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:43.864 13:35:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:43.864 13:35:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.864 13:35:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.864 13:35:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:43.864 13:35:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:43.864 13:35:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:43.864 13:35:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:43.864 13:35:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:43.864 13:35:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.864 13:35:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:43.864 13:35:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:43.864 13:35:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:43.864 13:35:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:43.864 13:35:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:43.864 13:35:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:43.864 Cannot find device "nvmf_tgt_br" 00:24:43.864 13:35:01 -- nvmf/common.sh@155 -- # true 00:24:43.864 13:35:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:43.864 Cannot find device "nvmf_tgt_br2" 00:24:43.864 13:35:01 -- nvmf/common.sh@156 -- # true 00:24:43.864 13:35:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:43.864 13:35:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:43.864 Cannot find device "nvmf_tgt_br" 00:24:43.864 13:35:01 -- nvmf/common.sh@158 -- # true 00:24:43.864 13:35:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:43.864 Cannot find device "nvmf_tgt_br2" 00:24:43.864 13:35:01 -- nvmf/common.sh@159 -- # true 00:24:43.864 13:35:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:44.122 13:35:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:44.122 13:35:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.122 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.122 13:35:01 -- nvmf/common.sh@162 -- # true 00:24:44.122 13:35:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.122 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:24:44.122 13:35:01 -- nvmf/common.sh@163 -- # true 00:24:44.122 13:35:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:44.122 13:35:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:44.122 13:35:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:44.122 13:35:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:44.122 13:35:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:44.122 13:35:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:44.122 13:35:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:44.122 13:35:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:44.122 13:35:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:44.122 13:35:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:44.122 13:35:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:44.122 13:35:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:44.122 13:35:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:44.122 13:35:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:44.122 13:35:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:44.122 13:35:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:44.122 13:35:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:44.122 13:35:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:44.122 13:35:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:44.122 13:35:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:44.122 13:35:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:44.123 13:35:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:44.123 13:35:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:44.123 13:35:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:44.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:24:44.123 00:24:44.123 --- 10.0.0.2 ping statistics --- 00:24:44.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.123 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:44.123 13:35:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:44.123 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:44.123 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:24:44.123 00:24:44.123 --- 10.0.0.3 ping statistics --- 00:24:44.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.123 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:44.123 13:35:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:44.123 00:24:44.123 --- 10.0.0.1 ping statistics --- 00:24:44.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.123 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:44.123 13:35:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.123 13:35:01 -- nvmf/common.sh@422 -- # return 0 00:24:44.123 13:35:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:44.123 13:35:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.123 13:35:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:44.123 13:35:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:44.123 13:35:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.123 13:35:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:44.123 13:35:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:44.380 13:35:01 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:44.380 13:35:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:44.380 13:35:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:44.380 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.380 13:35:01 -- nvmf/common.sh@470 -- # nvmfpid=79700 00:24:44.380 13:35:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:44.380 13:35:01 -- nvmf/common.sh@471 -- # waitforlisten 79700 00:24:44.380 13:35:01 -- common/autotest_common.sh@817 -- # '[' -z 79700 ']' 00:24:44.380 13:35:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.380 13:35:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.380 13:35:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.380 13:35:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.380 13:35:01 -- common/autotest_common.sh@10 -- # set +x 00:24:44.380 [2024-04-26 13:35:01.667387] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:44.380 [2024-04-26 13:35:01.667507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.380 [2024-04-26 13:35:01.812483] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.638 [2024-04-26 13:35:01.959521] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.638 [2024-04-26 13:35:01.959602] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.638 [2024-04-26 13:35:01.959621] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.638 [2024-04-26 13:35:01.959636] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.638 [2024-04-26 13:35:01.959649] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.638 [2024-04-26 13:35:01.959797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.638 [2024-04-26 13:35:01.960519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.638 [2024-04-26 13:35:01.960530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.571 13:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.571 13:35:02 -- common/autotest_common.sh@850 -- # return 0 00:24:45.571 13:35:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:45.571 13:35:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 13:35:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.571 13:35:02 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 [2024-04-26 13:35:02.773687] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 Malloc0 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 [2024-04-26 13:35:02.847473] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 [2024-04-26 13:35:02.855369] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:45.571 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.571 13:35:02 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:45.571 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.571 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.571 Malloc1 00:24:45.571 13:35:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.572 13:35:02 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:45.572 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.572 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.572 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.572 13:35:02 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:45.572 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.572 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.572 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.572 13:35:02 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:45.572 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.572 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.572 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.572 13:35:02 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:45.572 13:35:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.572 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:45.572 13:35:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.572 13:35:02 -- host/multicontroller.sh@44 -- # bdevperf_pid=79758 00:24:45.572 13:35:02 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:45.572 13:35:02 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.572 13:35:02 -- host/multicontroller.sh@47 -- # waitforlisten 79758 /var/tmp/bdevperf.sock 00:24:45.572 13:35:02 -- common/autotest_common.sh@817 -- # '[' -z 79758 ']' 00:24:45.572 13:35:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.572 13:35:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:45.572 13:35:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
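(A minimal sketch, not taken verbatim from the trace: once bdevperf is listening on /var/tmp/bdevperf.sock, the test drives it over that socket; the harness's rpc_cmd helper is assumed here to wrap the standard scripts/rpc.py client. Condensed from the calls recorded below, the multicontroller sequence is roughly the following; the repeated attach attempts are expected to fail with "already exists" errors, which is what the NOT wrapper checks.)

    RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # first path: succeeds and exposes bdev NVMe0n1
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # same name with a different hostnqn, a different subsystem, or multipath disabled: all rejected
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
    # second listener on port 4421: accepted as an additional path for the same controller
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1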
00:24:45.572 13:35:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:45.572 13:35:02 -- common/autotest_common.sh@10 -- # set +x 00:24:46.981 13:35:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:46.981 13:35:04 -- common/autotest_common.sh@850 -- # return 0 00:24:46.981 13:35:04 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:46.981 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.981 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.981 NVMe0n1 00:24:46.981 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.981 13:35:04 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.981 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.981 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.981 13:35:04 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:46.981 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.981 1 00:24:46.981 13:35:04 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.981 13:35:04 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.981 13:35:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.981 13:35:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.981 13:35:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:46.981 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.981 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.981 2024/04/26 13:35:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:46.981 request: 00:24:46.981 { 00:24:46.981 "method": "bdev_nvme_attach_controller", 00:24:46.981 "params": { 00:24:46.981 "name": "NVMe0", 00:24:46.981 "trtype": "tcp", 00:24:46.981 "traddr": "10.0.0.2", 00:24:46.981 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:46.981 "hostaddr": "10.0.0.2", 00:24:46.981 "hostsvcid": "60000", 00:24:46.981 "adrfam": "ipv4", 00:24:46.981 "trsvcid": "4420", 00:24:46.981 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:24:46.981 } 00:24:46.981 } 00:24:46.981 Got JSON-RPC error response 00:24:46.981 GoRPCClient: error on JSON-RPC call 00:24:46.981 13:35:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.981 13:35:04 -- 
common/autotest_common.sh@641 -- # es=1 00:24:46.981 13:35:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.981 13:35:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.981 13:35:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.981 13:35:04 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.981 13:35:04 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.981 13:35:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.981 13:35:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.981 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.981 13:35:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:46.981 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.981 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.981 2024/04/26 13:35:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:46.981 request: 00:24:46.981 { 00:24:46.981 "method": "bdev_nvme_attach_controller", 00:24:46.981 "params": { 00:24:46.981 "name": "NVMe0", 00:24:46.981 "trtype": "tcp", 00:24:46.981 "traddr": "10.0.0.2", 00:24:46.981 "hostaddr": "10.0.0.2", 00:24:46.982 "hostsvcid": "60000", 00:24:46.982 "adrfam": "ipv4", 00:24:46.982 "trsvcid": "4420", 00:24:46.982 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:24:46.982 } 00:24:46.982 } 00:24:46.982 Got JSON-RPC error response 00:24:46.982 GoRPCClient: error on JSON-RPC call 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@641 -- # es=1 00:24:46.982 13:35:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.982 13:35:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.982 13:35:04 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.982 13:35:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.982 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.982 13:35:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.982 13:35:04 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.982 13:35:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 2024/04/26 13:35:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:24:46.982 request: 00:24:46.982 { 00:24:46.982 "method": "bdev_nvme_attach_controller", 00:24:46.982 "params": { 00:24:46.982 "name": "NVMe0", 00:24:46.982 "trtype": "tcp", 00:24:46.982 "traddr": "10.0.0.2", 00:24:46.982 "hostaddr": "10.0.0.2", 00:24:46.982 "hostsvcid": "60000", 00:24:46.982 "adrfam": "ipv4", 00:24:46.982 "trsvcid": "4420", 00:24:46.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.982 "multipath": "disable" 00:24:46.982 } 00:24:46.982 } 00:24:46.982 Got JSON-RPC error response 00:24:46.982 GoRPCClient: error on JSON-RPC call 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@641 -- # es=1 00:24:46.982 13:35:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.982 13:35:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.982 13:35:04 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.982 13:35:04 -- common/autotest_common.sh@638 -- # local es=0 00:24:46.982 13:35:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.982 13:35:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:46.982 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.982 13:35:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:46.982 13:35:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:46.982 13:35:04 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 2024/04/26 13:35:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:24:46.982 request: 00:24:46.982 { 00:24:46.982 "method": "bdev_nvme_attach_controller", 00:24:46.982 "params": { 00:24:46.982 "name": "NVMe0", 
00:24:46.982 "trtype": "tcp", 00:24:46.982 "traddr": "10.0.0.2", 00:24:46.982 "hostaddr": "10.0.0.2", 00:24:46.982 "hostsvcid": "60000", 00:24:46.982 "adrfam": "ipv4", 00:24:46.982 "trsvcid": "4420", 00:24:46.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.982 "multipath": "failover" 00:24:46.982 } 00:24:46.982 } 00:24:46.982 Got JSON-RPC error response 00:24:46.982 GoRPCClient: error on JSON-RPC call 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@641 -- # es=1 00:24:46.982 13:35:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:46.982 13:35:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:46.982 13:35:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:46.982 13:35:04 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.982 13:35:04 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.982 13:35:04 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.982 13:35:04 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:46.982 13:35:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.982 13:35:04 -- common/autotest_common.sh@10 -- # set +x 00:24:46.982 13:35:04 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:46.982 13:35:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.982 13:35:04 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:46.982 13:35:04 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.354 0 00:24:48.354 13:35:05 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:48.354 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.354 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:48.355 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.355 13:35:05 -- host/multicontroller.sh@100 -- # killprocess 79758 00:24:48.355 13:35:05 -- common/autotest_common.sh@936 -- # '[' -z 79758 ']' 00:24:48.355 13:35:05 -- common/autotest_common.sh@940 -- # kill -0 79758 00:24:48.355 13:35:05 -- common/autotest_common.sh@941 -- # uname 00:24:48.355 13:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.355 13:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79758 00:24:48.355 13:35:05 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:24:48.355 13:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:48.355 killing process with pid 79758 00:24:48.355 13:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79758' 00:24:48.355 13:35:05 -- common/autotest_common.sh@955 -- # kill 79758 00:24:48.355 13:35:05 -- common/autotest_common.sh@960 -- # wait 79758 00:24:48.355 13:35:05 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.355 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.355 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:48.355 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.355 13:35:05 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:48.355 13:35:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.355 13:35:05 -- common/autotest_common.sh@10 -- # set +x 00:24:48.355 13:35:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.355 13:35:05 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:48.355 13:35:05 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:48.355 13:35:05 -- common/autotest_common.sh@1598 -- # read -r file 00:24:48.355 13:35:05 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:24:48.355 13:35:05 -- common/autotest_common.sh@1597 -- # sort -u 00:24:48.355 13:35:05 -- common/autotest_common.sh@1599 -- # cat 00:24:48.355 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:48.355 [2024-04-26 13:35:02.989772] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:48.355 [2024-04-26 13:35:02.989936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79758 ] 00:24:48.355 [2024-04-26 13:35:03.129715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.355 [2024-04-26 13:35:03.242229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.355 [2024-04-26 13:35:04.323045] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 10de12f3-1fd1-4fae-a600-d3ab3579699a already exists 00:24:48.355 [2024-04-26 13:35:04.323149] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:10de12f3-1fd1-4fae-a600-d3ab3579699a alias for bdev NVMe1n1 00:24:48.355 [2024-04-26 13:35:04.323173] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:48.355 Running I/O for 1 seconds... 
00:24:48.355 00:24:48.355 Latency(us) 00:24:48.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.355 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:48.355 NVMe0n1 : 1.00 18446.97 72.06 0.00 0.00 6927.68 2308.65 11975.21 00:24:48.355 =================================================================================================================== 00:24:48.355 Total : 18446.97 72.06 0.00 0.00 6927.68 2308.65 11975.21 00:24:48.355 Received shutdown signal, test time was about 1.000000 seconds 00:24:48.355 00:24:48.355 Latency(us) 00:24:48.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.355 =================================================================================================================== 00:24:48.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:48.355 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:24:48.355 13:35:05 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:48.355 13:35:05 -- common/autotest_common.sh@1598 -- # read -r file 00:24:48.355 13:35:05 -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:48.355 13:35:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:48.355 13:35:05 -- nvmf/common.sh@117 -- # sync 00:24:48.612 13:35:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.612 13:35:05 -- nvmf/common.sh@120 -- # set +e 00:24:48.612 13:35:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.612 13:35:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.612 rmmod nvme_tcp 00:24:48.612 rmmod nvme_fabrics 00:24:48.612 rmmod nvme_keyring 00:24:48.612 13:35:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:48.612 13:35:05 -- nvmf/common.sh@124 -- # set -e 00:24:48.612 13:35:05 -- nvmf/common.sh@125 -- # return 0 00:24:48.612 13:35:05 -- nvmf/common.sh@478 -- # '[' -n 79700 ']' 00:24:48.612 13:35:05 -- nvmf/common.sh@479 -- # killprocess 79700 00:24:48.612 13:35:05 -- common/autotest_common.sh@936 -- # '[' -z 79700 ']' 00:24:48.612 13:35:05 -- common/autotest_common.sh@940 -- # kill -0 79700 00:24:48.612 13:35:05 -- common/autotest_common.sh@941 -- # uname 00:24:48.612 13:35:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:48.612 13:35:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79700 00:24:48.612 13:35:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:48.612 13:35:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:48.612 killing process with pid 79700 00:24:48.612 13:35:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79700' 00:24:48.612 13:35:05 -- common/autotest_common.sh@955 -- # kill 79700 00:24:48.612 13:35:05 -- common/autotest_common.sh@960 -- # wait 79700 00:24:48.870 13:35:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:48.870 13:35:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:48.870 13:35:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:48.870 13:35:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:48.870 13:35:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:48.870 13:35:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.870 13:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:48.870 13:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.870 13:35:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:48.870 
00:24:48.870 real 0m5.136s 00:24:48.870 user 0m15.989s 00:24:48.870 sys 0m1.152s 00:24:48.870 13:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:48.870 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:48.870 ************************************ 00:24:48.870 END TEST nvmf_multicontroller 00:24:48.870 ************************************ 00:24:48.870 13:35:06 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:48.870 13:35:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:48.870 13:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:48.870 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.128 ************************************ 00:24:49.128 START TEST nvmf_aer 00:24:49.128 ************************************ 00:24:49.128 13:35:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:49.128 * Looking for test storage... 00:24:49.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:49.128 13:35:06 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:49.128 13:35:06 -- nvmf/common.sh@7 -- # uname -s 00:24:49.128 13:35:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.128 13:35:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.128 13:35:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.128 13:35:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.128 13:35:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.128 13:35:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.128 13:35:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.128 13:35:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.128 13:35:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.128 13:35:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.128 13:35:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:49.128 13:35:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:49.128 13:35:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.128 13:35:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.128 13:35:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:49.128 13:35:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.128 13:35:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:49.128 13:35:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.128 13:35:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.128 13:35:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.128 13:35:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.128 13:35:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.128 13:35:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.128 13:35:06 -- paths/export.sh@5 -- # export PATH 00:24:49.128 13:35:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.128 13:35:06 -- nvmf/common.sh@47 -- # : 0 00:24:49.129 13:35:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.129 13:35:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.129 13:35:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.129 13:35:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.129 13:35:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.129 13:35:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.129 13:35:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.129 13:35:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.129 13:35:06 -- host/aer.sh@11 -- # nvmftestinit 00:24:49.129 13:35:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:49.129 13:35:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.129 13:35:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:49.129 13:35:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:49.129 13:35:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:49.129 13:35:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.129 13:35:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.129 13:35:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.129 13:35:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:49.129 13:35:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:49.129 13:35:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:49.129 13:35:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:49.129 13:35:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:49.129 13:35:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:49.129 13:35:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.129 13:35:06 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.129 13:35:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:49.129 13:35:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:49.129 13:35:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:49.129 13:35:06 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:49.129 13:35:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:49.129 13:35:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.129 13:35:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:49.129 13:35:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:49.129 13:35:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:49.129 13:35:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:49.129 13:35:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:49.129 13:35:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:49.129 Cannot find device "nvmf_tgt_br" 00:24:49.129 13:35:06 -- nvmf/common.sh@155 -- # true 00:24:49.129 13:35:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:49.129 Cannot find device "nvmf_tgt_br2" 00:24:49.129 13:35:06 -- nvmf/common.sh@156 -- # true 00:24:49.129 13:35:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:49.129 13:35:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:49.129 Cannot find device "nvmf_tgt_br" 00:24:49.129 13:35:06 -- nvmf/common.sh@158 -- # true 00:24:49.129 13:35:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:49.129 Cannot find device "nvmf_tgt_br2" 00:24:49.129 13:35:06 -- nvmf/common.sh@159 -- # true 00:24:49.129 13:35:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:49.388 13:35:06 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:49.388 13:35:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:49.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:49.388 13:35:06 -- nvmf/common.sh@162 -- # true 00:24:49.388 13:35:06 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:49.388 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:49.388 13:35:06 -- nvmf/common.sh@163 -- # true 00:24:49.388 13:35:06 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:49.388 13:35:06 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:49.388 13:35:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:49.388 13:35:06 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:49.388 13:35:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:49.388 13:35:06 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:49.388 13:35:06 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:49.388 13:35:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:49.388 13:35:06 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:49.388 13:35:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:49.388 13:35:06 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:49.388 13:35:06 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:49.388 13:35:06 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:49.388 13:35:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:49.388 13:35:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:49.388 13:35:06 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:49.388 13:35:06 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:49.388 13:35:06 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:49.388 13:35:06 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:49.388 13:35:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:49.388 13:35:06 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:49.388 13:35:06 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:49.388 13:35:06 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:49.388 13:35:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:49.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:24:49.388 00:24:49.388 --- 10.0.0.2 ping statistics --- 00:24:49.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.388 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:24:49.388 13:35:06 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:49.388 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:49.388 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:24:49.388 00:24:49.388 --- 10.0.0.3 ping statistics --- 00:24:49.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.388 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:49.388 13:35:06 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:49.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:49.388 00:24:49.388 --- 10.0.0.1 ping statistics --- 00:24:49.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.388 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:49.388 13:35:06 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.388 13:35:06 -- nvmf/common.sh@422 -- # return 0 00:24:49.388 13:35:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:49.388 13:35:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.388 13:35:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:49.388 13:35:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:49.388 13:35:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.388 13:35:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:49.388 13:35:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:49.657 13:35:06 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:49.657 13:35:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:49.657 13:35:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:49.657 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.657 13:35:06 -- nvmf/common.sh@470 -- # nvmfpid=80013 00:24:49.657 13:35:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:49.657 13:35:06 -- nvmf/common.sh@471 -- # waitforlisten 80013 00:24:49.657 13:35:06 -- common/autotest_common.sh@817 -- # '[' -z 80013 ']' 00:24:49.657 13:35:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.657 13:35:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:49.657 13:35:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.657 13:35:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:49.657 13:35:06 -- common/autotest_common.sh@10 -- # set +x 00:24:49.657 [2024-04-26 13:35:06.909027] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:49.657 [2024-04-26 13:35:06.909134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.657 [2024-04-26 13:35:07.052342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.915 [2024-04-26 13:35:07.186648] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.915 [2024-04-26 13:35:07.186730] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.915 [2024-04-26 13:35:07.186745] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.915 [2024-04-26 13:35:07.186764] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.915 [2024-04-26 13:35:07.186773] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:49.915 [2024-04-26 13:35:07.187125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.915 [2024-04-26 13:35:07.187265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.916 [2024-04-26 13:35:07.187362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.916 [2024-04-26 13:35:07.187368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.482 13:35:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:50.482 13:35:07 -- common/autotest_common.sh@850 -- # return 0 00:24:50.482 13:35:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:50.482 13:35:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:50.482 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 13:35:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:50.741 13:35:07 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.741 13:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 [2024-04-26 13:35:07.947656] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.741 13:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:07 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:50.741 13:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 Malloc0 00:24:50.741 13:35:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:07 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:50.741 13:35:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:07 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:08 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.741 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.741 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 [2024-04-26 13:35:08.021247] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.741 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:50.741 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.741 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:50.741 [2024-04-26 13:35:08.028965] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:50.741 [ 00:24:50.741 { 00:24:50.741 "allow_any_host": true, 00:24:50.741 "hosts": [], 00:24:50.741 "listen_addresses": [], 00:24:50.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:50.741 "subtype": "Discovery" 00:24:50.741 }, 00:24:50.741 { 00:24:50.741 "allow_any_host": true, 00:24:50.741 "hosts": 
[], 00:24:50.741 "listen_addresses": [ 00:24:50.741 { 00:24:50.741 "adrfam": "IPv4", 00:24:50.741 "traddr": "10.0.0.2", 00:24:50.741 "transport": "TCP", 00:24:50.741 "trsvcid": "4420", 00:24:50.741 "trtype": "TCP" 00:24:50.741 } 00:24:50.741 ], 00:24:50.741 "max_cntlid": 65519, 00:24:50.741 "max_namespaces": 2, 00:24:50.741 "min_cntlid": 1, 00:24:50.741 "model_number": "SPDK bdev Controller", 00:24:50.741 "namespaces": [ 00:24:50.741 { 00:24:50.741 "bdev_name": "Malloc0", 00:24:50.741 "name": "Malloc0", 00:24:50.741 "nguid": "51C46F30917E49FCB5963355EDB045D8", 00:24:50.741 "nsid": 1, 00:24:50.741 "uuid": "51c46f30-917e-49fc-b596-3355edb045d8" 00:24:50.741 } 00:24:50.741 ], 00:24:50.741 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.741 "serial_number": "SPDK00000000000001", 00:24:50.741 "subtype": "NVMe" 00:24:50.741 } 00:24:50.741 ] 00:24:50.741 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.741 13:35:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:50.741 13:35:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:50.741 13:35:08 -- host/aer.sh@33 -- # aerpid=80067 00:24:50.741 13:35:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:50.741 13:35:08 -- common/autotest_common.sh@1251 -- # local i=0 00:24:50.741 13:35:08 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:50.741 13:35:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:50.741 13:35:08 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:24:50.741 13:35:08 -- common/autotest_common.sh@1254 -- # i=1 00:24:50.741 13:35:08 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:50.741 13:35:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:50.741 13:35:08 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:24:50.741 13:35:08 -- common/autotest_common.sh@1254 -- # i=2 00:24:50.741 13:35:08 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:24:51.000 13:35:08 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:51.000 13:35:08 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:51.000 13:35:08 -- common/autotest_common.sh@1262 -- # return 0 00:24:51.000 13:35:08 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 Malloc1 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 Asynchronous Event Request test 00:24:51.000 Attaching to 10.0.0.2 00:24:51.000 Attached to 10.0.0.2 00:24:51.000 Registering asynchronous event callbacks... 00:24:51.000 Starting namespace attribute notice tests for all controllers... 
00:24:51.000 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:51.000 aer_cb - Changed Namespace 00:24:51.000 Cleaning up... 00:24:51.000 [ 00:24:51.000 { 00:24:51.000 "allow_any_host": true, 00:24:51.000 "hosts": [], 00:24:51.000 "listen_addresses": [], 00:24:51.000 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:51.000 "subtype": "Discovery" 00:24:51.000 }, 00:24:51.000 { 00:24:51.000 "allow_any_host": true, 00:24:51.000 "hosts": [], 00:24:51.000 "listen_addresses": [ 00:24:51.000 { 00:24:51.000 "adrfam": "IPv4", 00:24:51.000 "traddr": "10.0.0.2", 00:24:51.000 "transport": "TCP", 00:24:51.000 "trsvcid": "4420", 00:24:51.000 "trtype": "TCP" 00:24:51.000 } 00:24:51.000 ], 00:24:51.000 "max_cntlid": 65519, 00:24:51.000 "max_namespaces": 2, 00:24:51.000 "min_cntlid": 1, 00:24:51.000 "model_number": "SPDK bdev Controller", 00:24:51.000 "namespaces": [ 00:24:51.000 { 00:24:51.000 "bdev_name": "Malloc0", 00:24:51.000 "name": "Malloc0", 00:24:51.000 "nguid": "51C46F30917E49FCB5963355EDB045D8", 00:24:51.000 "nsid": 1, 00:24:51.000 "uuid": "51c46f30-917e-49fc-b596-3355edb045d8" 00:24:51.000 }, 00:24:51.000 { 00:24:51.000 "bdev_name": "Malloc1", 00:24:51.000 "name": "Malloc1", 00:24:51.000 "nguid": "DC80D8BB94CC4F37A47C5FCB78FC0EFF", 00:24:51.000 "nsid": 2, 00:24:51.000 "uuid": "dc80d8bb-94cc-4f37-a47c-5fcb78fc0eff" 00:24:51.000 } 00:24:51.000 ], 00:24:51.000 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.000 "serial_number": "SPDK00000000000001", 00:24:51.000 "subtype": "NVMe" 00:24:51.000 } 00:24:51.000 ] 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@43 -- # wait 80067 00:24:51.000 13:35:08 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.000 13:35:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.000 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.000 13:35:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.000 13:35:08 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:51.000 13:35:08 -- host/aer.sh@51 -- # nvmftestfini 00:24:51.000 13:35:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:51.000 13:35:08 -- nvmf/common.sh@117 -- # sync 00:24:51.259 13:35:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.259 13:35:08 -- nvmf/common.sh@120 -- # set +e 00:24:51.259 13:35:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.259 13:35:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.259 rmmod nvme_tcp 00:24:51.259 rmmod nvme_fabrics 00:24:51.259 rmmod nvme_keyring 00:24:51.259 13:35:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.259 13:35:08 -- nvmf/common.sh@124 -- # set -e 00:24:51.259 13:35:08 -- nvmf/common.sh@125 -- # return 0 00:24:51.259 13:35:08 -- nvmf/common.sh@478 -- # '[' -n 80013 ']' 00:24:51.259 13:35:08 -- nvmf/common.sh@479 -- # killprocess 80013 00:24:51.259 13:35:08 -- 
common/autotest_common.sh@936 -- # '[' -z 80013 ']' 00:24:51.259 13:35:08 -- common/autotest_common.sh@940 -- # kill -0 80013 00:24:51.259 13:35:08 -- common/autotest_common.sh@941 -- # uname 00:24:51.259 13:35:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:51.259 13:35:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80013 00:24:51.259 13:35:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:51.259 13:35:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:51.259 killing process with pid 80013 00:24:51.259 13:35:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80013' 00:24:51.259 13:35:08 -- common/autotest_common.sh@955 -- # kill 80013 00:24:51.259 [2024-04-26 13:35:08.561748] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:51.259 13:35:08 -- common/autotest_common.sh@960 -- # wait 80013 00:24:51.518 13:35:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:51.518 13:35:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:51.518 13:35:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:51.518 13:35:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.518 13:35:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.518 13:35:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.518 13:35:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.518 13:35:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.518 13:35:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:51.518 00:24:51.518 real 0m2.513s 00:24:51.518 user 0m6.634s 00:24:51.518 sys 0m0.711s 00:24:51.518 13:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:51.518 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.518 ************************************ 00:24:51.518 END TEST nvmf_aer 00:24:51.518 ************************************ 00:24:51.518 13:35:08 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.518 13:35:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:51.518 13:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:51.518 13:35:08 -- common/autotest_common.sh@10 -- # set +x 00:24:51.776 ************************************ 00:24:51.776 START TEST nvmf_async_init 00:24:51.776 ************************************ 00:24:51.776 13:35:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:51.776 * Looking for test storage... 
00:24:51.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:51.776 13:35:09 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:51.776 13:35:09 -- nvmf/common.sh@7 -- # uname -s 00:24:51.776 13:35:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.776 13:35:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.777 13:35:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.777 13:35:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.777 13:35:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.777 13:35:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.777 13:35:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.777 13:35:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.777 13:35:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.777 13:35:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.777 13:35:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:51.777 13:35:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:51.777 13:35:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.777 13:35:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.777 13:35:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:51.777 13:35:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.777 13:35:09 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:51.777 13:35:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.777 13:35:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.777 13:35:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.777 13:35:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.777 13:35:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.777 13:35:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.777 13:35:09 -- paths/export.sh@5 -- # export PATH 00:24:51.777 13:35:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.777 13:35:09 -- nvmf/common.sh@47 -- # : 0 00:24:51.777 13:35:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.777 13:35:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.777 13:35:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.777 13:35:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.777 13:35:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.777 13:35:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.777 13:35:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.777 13:35:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.777 13:35:09 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:51.777 13:35:09 -- host/async_init.sh@14 -- # null_block_size=512 00:24:51.777 13:35:09 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:51.777 13:35:09 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:51.777 13:35:09 -- host/async_init.sh@20 -- # uuidgen 00:24:51.777 13:35:09 -- host/async_init.sh@20 -- # tr -d - 00:24:51.777 13:35:09 -- host/async_init.sh@20 -- # nguid=ad7b522ffe984087b7fcf14fa0f4cbf9 00:24:51.777 13:35:09 -- host/async_init.sh@22 -- # nvmftestinit 00:24:51.777 13:35:09 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:51.777 13:35:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.777 13:35:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:51.777 13:35:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:51.777 13:35:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:51.777 13:35:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.777 13:35:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.777 13:35:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.777 13:35:09 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:51.777 13:35:09 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:51.777 13:35:09 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:51.777 13:35:09 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:51.777 13:35:09 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:51.777 13:35:09 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:51.777 13:35:09 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.777 13:35:09 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.777 13:35:09 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:51.777 13:35:09 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:51.777 13:35:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:51.777 13:35:09 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:51.777 13:35:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:51.777 13:35:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.777 13:35:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:51.777 13:35:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:51.777 13:35:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:51.777 13:35:09 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:51.777 13:35:09 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:51.777 13:35:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:51.777 Cannot find device "nvmf_tgt_br" 00:24:51.777 13:35:09 -- nvmf/common.sh@155 -- # true 00:24:51.777 13:35:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:51.777 Cannot find device "nvmf_tgt_br2" 00:24:51.777 13:35:09 -- nvmf/common.sh@156 -- # true 00:24:51.777 13:35:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:51.777 13:35:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:51.777 Cannot find device "nvmf_tgt_br" 00:24:51.777 13:35:09 -- nvmf/common.sh@158 -- # true 00:24:51.777 13:35:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:51.777 Cannot find device "nvmf_tgt_br2" 00:24:51.777 13:35:09 -- nvmf/common.sh@159 -- # true 00:24:51.777 13:35:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:52.036 13:35:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:52.036 13:35:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:52.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:52.036 13:35:09 -- nvmf/common.sh@162 -- # true 00:24:52.036 13:35:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:52.036 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:52.036 13:35:09 -- nvmf/common.sh@163 -- # true 00:24:52.036 13:35:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:52.036 13:35:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:52.036 13:35:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:52.036 13:35:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:52.036 13:35:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:52.036 13:35:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:52.036 13:35:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:52.036 13:35:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:52.036 13:35:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:52.036 13:35:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:52.036 13:35:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:52.036 13:35:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:52.036 13:35:09 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:52.036 13:35:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:52.036 13:35:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:52.036 13:35:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:52.036 13:35:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:52.036 13:35:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:52.036 13:35:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:52.036 13:35:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:52.036 13:35:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:52.036 13:35:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:52.036 13:35:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:52.036 13:35:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:52.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:24:52.036 00:24:52.036 --- 10.0.0.2 ping statistics --- 00:24:52.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.036 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:52.036 13:35:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:52.036 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:52.036 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:52.036 00:24:52.036 --- 10.0.0.3 ping statistics --- 00:24:52.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.036 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:52.036 13:35:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:52.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:52.036 00:24:52.036 --- 10.0.0.1 ping statistics --- 00:24:52.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.036 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:52.036 13:35:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.036 13:35:09 -- nvmf/common.sh@422 -- # return 0 00:24:52.036 13:35:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:52.036 13:35:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.036 13:35:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:52.036 13:35:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:52.036 13:35:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.036 13:35:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:52.036 13:35:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:52.036 13:35:09 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:52.036 13:35:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:52.036 13:35:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:52.036 13:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:52.036 13:35:09 -- nvmf/common.sh@470 -- # nvmfpid=80240 00:24:52.036 13:35:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:52.036 13:35:09 -- nvmf/common.sh@471 -- # waitforlisten 80240 00:24:52.036 13:35:09 -- common/autotest_common.sh@817 -- # '[' -z 80240 ']' 00:24:52.036 13:35:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.036 13:35:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:52.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.036 13:35:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.036 13:35:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:52.036 13:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:52.294 [2024-04-26 13:35:09.553256] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:52.294 [2024-04-26 13:35:09.553365] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.294 [2024-04-26 13:35:09.691073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.553 [2024-04-26 13:35:09.823543] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.553 [2024-04-26 13:35:09.823622] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.553 [2024-04-26 13:35:09.823637] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.553 [2024-04-26 13:35:09.823648] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.553 [2024-04-26 13:35:09.823658] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:52.553 [2024-04-26 13:35:09.823703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.486 13:35:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:53.486 13:35:10 -- common/autotest_common.sh@850 -- # return 0 00:24:53.486 13:35:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:53.486 13:35:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 13:35:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.486 13:35:10 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 [2024-04-26 13:35:10.673873] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 null0 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ad7b522ffe984087b7fcf14fa0f4cbf9 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.486 [2024-04-26 13:35:10.718068] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.486 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.486 13:35:10 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:53.486 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.486 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.744 nvme0n1 00:24:53.745 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.745 13:35:10 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:53.745 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.745 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.745 [ 00:24:53.745 { 00:24:53.745 "aliases": [ 00:24:53.745 "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9" 
00:24:53.745 ], 00:24:53.745 "assigned_rate_limits": { 00:24:53.745 "r_mbytes_per_sec": 0, 00:24:53.745 "rw_ios_per_sec": 0, 00:24:53.745 "rw_mbytes_per_sec": 0, 00:24:53.745 "w_mbytes_per_sec": 0 00:24:53.745 }, 00:24:53.745 "block_size": 512, 00:24:53.745 "claimed": false, 00:24:53.745 "driver_specific": { 00:24:53.745 "mp_policy": "active_passive", 00:24:53.745 "nvme": [ 00:24:53.745 { 00:24:53.745 "ctrlr_data": { 00:24:53.745 "ana_reporting": false, 00:24:53.745 "cntlid": 1, 00:24:53.745 "firmware_revision": "24.05", 00:24:53.745 "model_number": "SPDK bdev Controller", 00:24:53.745 "multi_ctrlr": true, 00:24:53.745 "oacs": { 00:24:53.745 "firmware": 0, 00:24:53.745 "format": 0, 00:24:53.745 "ns_manage": 0, 00:24:53.745 "security": 0 00:24:53.745 }, 00:24:53.745 "serial_number": "00000000000000000000", 00:24:53.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.745 "vendor_id": "0x8086" 00:24:53.745 }, 00:24:53.745 "ns_data": { 00:24:53.745 "can_share": true, 00:24:53.745 "id": 1 00:24:53.745 }, 00:24:53.745 "trid": { 00:24:53.745 "adrfam": "IPv4", 00:24:53.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.745 "traddr": "10.0.0.2", 00:24:53.745 "trsvcid": "4420", 00:24:53.745 "trtype": "TCP" 00:24:53.745 }, 00:24:53.745 "vs": { 00:24:53.745 "nvme_version": "1.3" 00:24:53.745 } 00:24:53.745 } 00:24:53.745 ] 00:24:53.745 }, 00:24:53.745 "memory_domains": [ 00:24:53.745 { 00:24:53.745 "dma_device_id": "system", 00:24:53.745 "dma_device_type": 1 00:24:53.745 } 00:24:53.745 ], 00:24:53.745 "name": "nvme0n1", 00:24:53.745 "num_blocks": 2097152, 00:24:53.745 "product_name": "NVMe disk", 00:24:53.745 "supported_io_types": { 00:24:53.745 "abort": true, 00:24:53.745 "compare": true, 00:24:53.745 "compare_and_write": true, 00:24:53.745 "flush": true, 00:24:53.745 "nvme_admin": true, 00:24:53.745 "nvme_io": true, 00:24:53.745 "read": true, 00:24:53.745 "reset": true, 00:24:53.745 "unmap": false, 00:24:53.745 "write": true, 00:24:53.745 "write_zeroes": true 00:24:53.745 }, 00:24:53.745 "uuid": "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9", 00:24:53.745 "zoned": false 00:24:53.745 } 00:24:53.745 ] 00:24:53.745 13:35:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.745 13:35:10 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:53.745 13:35:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.745 13:35:10 -- common/autotest_common.sh@10 -- # set +x 00:24:53.745 [2024-04-26 13:35:10.992271] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:53.745 [2024-04-26 13:35:10.992440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bb0640 (9): Bad file descriptor 00:24:53.745 [2024-04-26 13:35:11.124992] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
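(Aside: the controller reset exercised here can be reproduced by hand with SPDK's rpc.py helper — a minimal sketch, assuming the in-tree scripts/rpc.py and the default /var/tmp/spdk.sock RPC socket; the flags mirror the rpc_cmd calls traced above.)
# attach a bdev controller over TCP to the subsystem created earlier (produces nvme0n1)
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
# disconnect and reconnect the controller; on success a new admin connection is made
./scripts/rpc.py bdev_nvme_reset_controller nvme0
# inspect ctrlr_data afterwards — the log above shows cntlid moving from 1 to 2 after the reset
./scripts/rpc.py bdev_get_bdevs -b nvme0n1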
00:24:53.745 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.745 13:35:11 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:53.745 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.745 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.745 [ 00:24:53.745 { 00:24:53.745 "aliases": [ 00:24:53.745 "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9" 00:24:53.745 ], 00:24:53.745 "assigned_rate_limits": { 00:24:53.745 "r_mbytes_per_sec": 0, 00:24:53.745 "rw_ios_per_sec": 0, 00:24:53.745 "rw_mbytes_per_sec": 0, 00:24:53.745 "w_mbytes_per_sec": 0 00:24:53.745 }, 00:24:53.745 "block_size": 512, 00:24:53.745 "claimed": false, 00:24:53.745 "driver_specific": { 00:24:53.745 "mp_policy": "active_passive", 00:24:53.745 "nvme": [ 00:24:53.745 { 00:24:53.745 "ctrlr_data": { 00:24:53.745 "ana_reporting": false, 00:24:53.745 "cntlid": 2, 00:24:53.745 "firmware_revision": "24.05", 00:24:53.745 "model_number": "SPDK bdev Controller", 00:24:53.745 "multi_ctrlr": true, 00:24:53.745 "oacs": { 00:24:53.745 "firmware": 0, 00:24:53.745 "format": 0, 00:24:53.745 "ns_manage": 0, 00:24:53.745 "security": 0 00:24:53.745 }, 00:24:53.745 "serial_number": "00000000000000000000", 00:24:53.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.745 "vendor_id": "0x8086" 00:24:53.745 }, 00:24:53.745 "ns_data": { 00:24:53.745 "can_share": true, 00:24:53.745 "id": 1 00:24:53.745 }, 00:24:53.745 "trid": { 00:24:53.745 "adrfam": "IPv4", 00:24:53.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.745 "traddr": "10.0.0.2", 00:24:53.745 "trsvcid": "4420", 00:24:53.745 "trtype": "TCP" 00:24:53.745 }, 00:24:53.745 "vs": { 00:24:53.745 "nvme_version": "1.3" 00:24:53.745 } 00:24:53.745 } 00:24:53.745 ] 00:24:53.745 }, 00:24:53.745 "memory_domains": [ 00:24:53.745 { 00:24:53.745 "dma_device_id": "system", 00:24:53.745 "dma_device_type": 1 00:24:53.745 } 00:24:53.745 ], 00:24:53.745 "name": "nvme0n1", 00:24:53.745 "num_blocks": 2097152, 00:24:53.745 "product_name": "NVMe disk", 00:24:53.745 "supported_io_types": { 00:24:53.745 "abort": true, 00:24:53.745 "compare": true, 00:24:53.745 "compare_and_write": true, 00:24:53.745 "flush": true, 00:24:53.745 "nvme_admin": true, 00:24:53.745 "nvme_io": true, 00:24:53.745 "read": true, 00:24:53.745 "reset": true, 00:24:53.745 "unmap": false, 00:24:53.745 "write": true, 00:24:53.745 "write_zeroes": true 00:24:53.745 }, 00:24:53.745 "uuid": "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9", 00:24:53.745 "zoned": false 00:24:53.745 } 00:24:53.745 ] 00:24:53.745 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.745 13:35:11 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.745 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.745 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:53.745 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:53.745 13:35:11 -- host/async_init.sh@53 -- # mktemp 00:24:53.745 13:35:11 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XWR1jnqALk 00:24:53.745 13:35:11 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:53.745 13:35:11 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XWR1jnqALk 00:24:53.745 13:35:11 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:53.745 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:53.745 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 13:35:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:54.004 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.004 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 [2024-04-26 13:35:11.196445] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.004 [2024-04-26 13:35:11.196683] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.004 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XWR1jnqALk 00:24:54.004 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.004 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 [2024-04-26 13:35:11.204443] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:54.004 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XWR1jnqALk 00:24:54.004 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.004 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 [2024-04-26 13:35:11.212445] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.004 [2024-04-26 13:35:11.212546] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:54.004 nvme0n1 00:24:54.004 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:54.004 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.004 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 [ 00:24:54.004 { 00:24:54.004 "aliases": [ 00:24:54.004 "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9" 00:24:54.004 ], 00:24:54.004 "assigned_rate_limits": { 00:24:54.004 "r_mbytes_per_sec": 0, 00:24:54.004 "rw_ios_per_sec": 0, 00:24:54.004 "rw_mbytes_per_sec": 0, 00:24:54.004 "w_mbytes_per_sec": 0 00:24:54.004 }, 00:24:54.004 "block_size": 512, 00:24:54.004 "claimed": false, 00:24:54.004 "driver_specific": { 00:24:54.004 "mp_policy": "active_passive", 00:24:54.004 "nvme": [ 00:24:54.004 { 00:24:54.004 "ctrlr_data": { 00:24:54.004 "ana_reporting": false, 00:24:54.004 "cntlid": 3, 00:24:54.004 "firmware_revision": "24.05", 00:24:54.004 "model_number": "SPDK bdev Controller", 00:24:54.004 "multi_ctrlr": true, 00:24:54.004 "oacs": { 00:24:54.004 "firmware": 0, 00:24:54.004 "format": 0, 00:24:54.004 "ns_manage": 0, 00:24:54.004 "security": 0 00:24:54.004 }, 00:24:54.004 "serial_number": "00000000000000000000", 00:24:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.004 "vendor_id": "0x8086" 00:24:54.004 }, 00:24:54.004 "ns_data": { 00:24:54.004 "can_share": true, 00:24:54.004 "id": 1 00:24:54.004 }, 00:24:54.004 "trid": { 00:24:54.004 "adrfam": "IPv4", 00:24:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.004 "traddr": "10.0.0.2", 00:24:54.004 "trsvcid": "4421", 00:24:54.004 "trtype": 
"TCP" 00:24:54.004 }, 00:24:54.004 "vs": { 00:24:54.004 "nvme_version": "1.3" 00:24:54.004 } 00:24:54.004 } 00:24:54.004 ] 00:24:54.004 }, 00:24:54.004 "memory_domains": [ 00:24:54.004 { 00:24:54.004 "dma_device_id": "system", 00:24:54.004 "dma_device_type": 1 00:24:54.004 } 00:24:54.004 ], 00:24:54.004 "name": "nvme0n1", 00:24:54.004 "num_blocks": 2097152, 00:24:54.004 "product_name": "NVMe disk", 00:24:54.004 "supported_io_types": { 00:24:54.004 "abort": true, 00:24:54.004 "compare": true, 00:24:54.004 "compare_and_write": true, 00:24:54.004 "flush": true, 00:24:54.004 "nvme_admin": true, 00:24:54.004 "nvme_io": true, 00:24:54.004 "read": true, 00:24:54.004 "reset": true, 00:24:54.004 "unmap": false, 00:24:54.004 "write": true, 00:24:54.004 "write_zeroes": true 00:24:54.004 }, 00:24:54.004 "uuid": "ad7b522f-fe98-4087-b7fc-f14fa0f4cbf9", 00:24:54.004 "zoned": false 00:24:54.004 } 00:24:54.004 ] 00:24:54.004 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.004 13:35:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.004 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.004 13:35:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:54.004 13:35:11 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.XWR1jnqALk 00:24:54.004 13:35:11 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:54.004 13:35:11 -- host/async_init.sh@78 -- # nvmftestfini 00:24:54.004 13:35:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:54.004 13:35:11 -- nvmf/common.sh@117 -- # sync 00:24:54.004 13:35:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.004 13:35:11 -- nvmf/common.sh@120 -- # set +e 00:24:54.004 13:35:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.004 13:35:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.004 rmmod nvme_tcp 00:24:54.004 rmmod nvme_fabrics 00:24:54.004 rmmod nvme_keyring 00:24:54.004 13:35:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.004 13:35:11 -- nvmf/common.sh@124 -- # set -e 00:24:54.004 13:35:11 -- nvmf/common.sh@125 -- # return 0 00:24:54.004 13:35:11 -- nvmf/common.sh@478 -- # '[' -n 80240 ']' 00:24:54.004 13:35:11 -- nvmf/common.sh@479 -- # killprocess 80240 00:24:54.004 13:35:11 -- common/autotest_common.sh@936 -- # '[' -z 80240 ']' 00:24:54.004 13:35:11 -- common/autotest_common.sh@940 -- # kill -0 80240 00:24:54.004 13:35:11 -- common/autotest_common.sh@941 -- # uname 00:24:54.004 13:35:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.004 13:35:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80240 00:24:54.262 13:35:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:54.262 13:35:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:54.262 killing process with pid 80240 00:24:54.262 13:35:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80240' 00:24:54.263 13:35:11 -- common/autotest_common.sh@955 -- # kill 80240 00:24:54.263 [2024-04-26 13:35:11.464432] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:54.263 [2024-04-26 13:35:11.464474] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:54.263 13:35:11 -- common/autotest_common.sh@960 -- # wait 80240 00:24:54.263 13:35:11 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:24:54.263 13:35:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:54.263 13:35:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:54.263 13:35:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.263 13:35:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.263 13:35:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.263 13:35:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.263 13:35:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.521 13:35:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:54.521 00:24:54.521 real 0m2.756s 00:24:54.521 user 0m2.663s 00:24:54.521 sys 0m0.647s 00:24:54.521 13:35:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:54.521 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.521 ************************************ 00:24:54.521 END TEST nvmf_async_init 00:24:54.521 ************************************ 00:24:54.521 13:35:11 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:54.521 13:35:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:54.521 13:35:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.521 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.521 ************************************ 00:24:54.521 START TEST dma 00:24:54.521 ************************************ 00:24:54.521 13:35:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:54.521 * Looking for test storage... 00:24:54.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:54.521 13:35:11 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.521 13:35:11 -- nvmf/common.sh@7 -- # uname -s 00:24:54.521 13:35:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.521 13:35:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.521 13:35:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.521 13:35:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.521 13:35:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.521 13:35:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.521 13:35:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.521 13:35:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.521 13:35:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.521 13:35:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.521 13:35:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:54.521 13:35:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:54.521 13:35:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.521 13:35:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.521 13:35:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:54.521 13:35:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.521 13:35:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:54.521 13:35:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.521 13:35:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.521 13:35:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
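(Aside: a condensed view of the TLS/PSK path the async_init test walked through above — a sketch only; the key value, host NQN and listener port are the ones printed in the trace, and the scripts/rpc.py path is assumed.)
KEY=/tmp/tmp.XWR1jnqALk
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"
# restrict the subsystem to named hosts, then open a secure-channel listener on 4421
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
# register the host with its PSK (path-based PSK is flagged as deprecated in the log)
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
# initiator side: attach with the same PSK over the TLS listener
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"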
00:24:54.521 13:35:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.521 13:35:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.521 13:35:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.521 13:35:11 -- paths/export.sh@5 -- # export PATH 00:24:54.521 13:35:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.521 13:35:11 -- nvmf/common.sh@47 -- # : 0 00:24:54.521 13:35:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.521 13:35:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.521 13:35:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.521 13:35:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.521 13:35:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.521 13:35:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.521 13:35:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.521 13:35:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.780 13:35:11 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:54.780 13:35:11 -- host/dma.sh@13 -- # exit 0 00:24:54.780 00:24:54.780 real 0m0.110s 00:24:54.780 user 0m0.051s 00:24:54.780 sys 0m0.064s 00:24:54.780 13:35:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:54.780 13:35:11 -- common/autotest_common.sh@10 -- # set +x 00:24:54.780 ************************************ 00:24:54.780 END TEST dma 00:24:54.780 ************************************ 00:24:54.780 13:35:12 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:54.780 13:35:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:54.780 13:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:54.780 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:54.780 ************************************ 00:24:54.780 START TEST nvmf_identify 00:24:54.780 ************************************ 00:24:54.780 13:35:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:54.780 * Looking for test storage... 00:24:54.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:54.780 13:35:12 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.780 13:35:12 -- nvmf/common.sh@7 -- # uname -s 00:24:54.780 13:35:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.780 13:35:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.780 13:35:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.780 13:35:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.780 13:35:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.780 13:35:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.780 13:35:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.780 13:35:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.780 13:35:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.780 13:35:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.780 13:35:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:54.780 13:35:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:54.780 13:35:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.780 13:35:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.780 13:35:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:54.780 13:35:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.780 13:35:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:54.780 13:35:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.780 13:35:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.780 13:35:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.780 13:35:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.780 13:35:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.780 13:35:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.780 13:35:12 -- paths/export.sh@5 -- # export PATH 00:24:54.780 13:35:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.780 13:35:12 -- nvmf/common.sh@47 -- # : 0 00:24:54.780 13:35:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.038 13:35:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.038 13:35:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.038 13:35:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.039 13:35:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.039 13:35:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.039 13:35:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.039 13:35:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.039 13:35:12 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.039 13:35:12 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.039 13:35:12 -- host/identify.sh@14 -- # nvmftestinit 00:24:55.039 13:35:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:55.039 13:35:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.039 13:35:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:55.039 13:35:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:55.039 13:35:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:55.039 13:35:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.039 13:35:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.039 13:35:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.039 13:35:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:55.039 13:35:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:55.039 13:35:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:55.039 13:35:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:55.039 13:35:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:55.039 13:35:12 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:24:55.039 13:35:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.039 13:35:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.039 13:35:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:55.039 13:35:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:55.039 13:35:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:55.039 13:35:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:55.039 13:35:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:55.039 13:35:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.039 13:35:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:55.039 13:35:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:55.039 13:35:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:55.039 13:35:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:55.039 13:35:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:55.039 13:35:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:55.039 Cannot find device "nvmf_tgt_br" 00:24:55.039 13:35:12 -- nvmf/common.sh@155 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:55.039 Cannot find device "nvmf_tgt_br2" 00:24:55.039 13:35:12 -- nvmf/common.sh@156 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:55.039 13:35:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:55.039 Cannot find device "nvmf_tgt_br" 00:24:55.039 13:35:12 -- nvmf/common.sh@158 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:55.039 Cannot find device "nvmf_tgt_br2" 00:24:55.039 13:35:12 -- nvmf/common.sh@159 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:55.039 13:35:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:55.039 13:35:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:55.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.039 13:35:12 -- nvmf/common.sh@162 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:55.039 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:55.039 13:35:12 -- nvmf/common.sh@163 -- # true 00:24:55.039 13:35:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:55.039 13:35:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:55.039 13:35:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:55.039 13:35:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:55.039 13:35:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:55.039 13:35:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:55.039 13:35:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:55.039 13:35:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:55.039 13:35:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:55.039 13:35:12 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:24:55.039 13:35:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:55.298 13:35:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:55.298 13:35:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:55.298 13:35:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:55.298 13:35:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:55.298 13:35:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:55.298 13:35:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:55.298 13:35:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:55.298 13:35:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:55.298 13:35:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:55.298 13:35:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:55.298 13:35:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:55.298 13:35:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:55.298 13:35:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:55.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:55.298 00:24:55.298 --- 10.0.0.2 ping statistics --- 00:24:55.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.298 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:55.298 13:35:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:55.298 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:55.298 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:24:55.298 00:24:55.298 --- 10.0.0.3 ping statistics --- 00:24:55.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.298 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:55.298 13:35:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:55.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:24:55.298 00:24:55.298 --- 10.0.0.1 ping statistics --- 00:24:55.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.298 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:55.298 13:35:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.298 13:35:12 -- nvmf/common.sh@422 -- # return 0 00:24:55.298 13:35:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:55.298 13:35:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.298 13:35:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:55.298 13:35:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:55.298 13:35:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.298 13:35:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:55.298 13:35:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:55.298 13:35:12 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:55.298 13:35:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:55.298 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:55.298 13:35:12 -- host/identify.sh@19 -- # nvmfpid=80521 00:24:55.298 13:35:12 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:55.298 13:35:12 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.298 13:35:12 -- host/identify.sh@23 -- # waitforlisten 80521 00:24:55.298 13:35:12 -- common/autotest_common.sh@817 -- # '[' -z 80521 ']' 00:24:55.298 13:35:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.298 13:35:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:55.298 13:35:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.298 13:35:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:55.298 13:35:12 -- common/autotest_common.sh@10 -- # set +x 00:24:55.298 [2024-04-26 13:35:12.686853] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:55.298 [2024-04-26 13:35:12.686962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.556 [2024-04-26 13:35:12.830386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.556 [2024-04-26 13:35:12.967313] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.556 [2024-04-26 13:35:12.967381] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.556 [2024-04-26 13:35:12.967396] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.556 [2024-04-26 13:35:12.967407] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.556 [2024-04-26 13:35:12.967417] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.556 [2024-04-26 13:35:12.967594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.556 [2024-04-26 13:35:12.967968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.556 [2024-04-26 13:35:12.968613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.556 [2024-04-26 13:35:12.968647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.490 13:35:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:56.490 13:35:13 -- common/autotest_common.sh@850 -- # return 0 00:24:56.490 13:35:13 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 [2024-04-26 13:35:13.712855] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:56.490 13:35:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 13:35:13 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 Malloc0 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 [2024-04-26 13:35:13.824181] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.490 13:35:13 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:56.490 13:35:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:56.490 13:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:56.490 [2024-04-26 13:35:13.839848] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:56.490 [ 
00:24:56.490 { 00:24:56.490 "allow_any_host": true, 00:24:56.490 "hosts": [], 00:24:56.490 "listen_addresses": [ 00:24:56.490 { 00:24:56.490 "adrfam": "IPv4", 00:24:56.490 "traddr": "10.0.0.2", 00:24:56.490 "transport": "TCP", 00:24:56.490 "trsvcid": "4420", 00:24:56.490 "trtype": "TCP" 00:24:56.490 } 00:24:56.490 ], 00:24:56.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:56.490 "subtype": "Discovery" 00:24:56.490 }, 00:24:56.490 { 00:24:56.490 "allow_any_host": true, 00:24:56.490 "hosts": [], 00:24:56.490 "listen_addresses": [ 00:24:56.490 { 00:24:56.490 "adrfam": "IPv4", 00:24:56.490 "traddr": "10.0.0.2", 00:24:56.490 "transport": "TCP", 00:24:56.490 "trsvcid": "4420", 00:24:56.490 "trtype": "TCP" 00:24:56.490 } 00:24:56.490 ], 00:24:56.490 "max_cntlid": 65519, 00:24:56.490 "max_namespaces": 32, 00:24:56.490 "min_cntlid": 1, 00:24:56.490 "model_number": "SPDK bdev Controller", 00:24:56.490 "namespaces": [ 00:24:56.490 { 00:24:56.490 "bdev_name": "Malloc0", 00:24:56.490 "eui64": "ABCDEF0123456789", 00:24:56.490 "name": "Malloc0", 00:24:56.490 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:56.490 "nsid": 1, 00:24:56.490 "uuid": "9392e5fb-3a7e-4f40-83d5-fb1e2dadd090" 00:24:56.490 } 00:24:56.490 ], 00:24:56.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.490 "serial_number": "SPDK00000000000001", 00:24:56.490 "subtype": "NVMe" 00:24:56.490 } 00:24:56.490 ] 00:24:56.490 13:35:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:56.491 13:35:13 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:56.491 [2024-04-26 13:35:13.871174] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
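Before the identify pass whose startup banner appears above, host/identify.sh configured the target through rpc_cmd, the autotest helper that forwards to scripts/rpc.py. A minimal standalone equivalent of that configuration, a sketch with every flag copied verbatim from the rpc_cmd calls traced above, would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems                          # returns the JSON dump shown above

With the listener on 10.0.0.2:4420 in place, spdk_nvme_identify is pointed at the discovery subsystem; its DPDK EAL parameters and the nvme_tcp/nvme_ctrlr controller-initialization debug output continue below.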
00:24:56.491 [2024-04-26 13:35:13.871232] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80579 ] 00:24:56.752 [2024-04-26 13:35:14.010165] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:56.752 [2024-04-26 13:35:14.010303] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:56.752 [2024-04-26 13:35:14.010313] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:56.752 [2024-04-26 13:35:14.010331] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:56.752 [2024-04-26 13:35:14.010346] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:56.752 [2024-04-26 13:35:14.010533] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:56.752 [2024-04-26 13:35:14.010589] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8e5280 0 00:24:56.752 [2024-04-26 13:35:14.014815] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:56.752 [2024-04-26 13:35:14.014840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:56.752 [2024-04-26 13:35:14.014846] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:56.752 [2024-04-26 13:35:14.014850] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:56.752 [2024-04-26 13:35:14.014906] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.014914] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.014919] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.014937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:56.752 [2024-04-26 13:35:14.014969] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.022828] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.022850] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.022856] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.022861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.022876] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:56.752 [2024-04-26 13:35:14.022885] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:56.752 [2024-04-26 13:35:14.022891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:56.752 [2024-04-26 13:35:14.022911] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.022918] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.022922] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.022932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.022960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023047] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023054] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.023058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023062] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023073] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:56.752 [2024-04-26 13:35:14.023082] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:56.752 [2024-04-26 13:35:14.023090] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023095] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023099] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.023107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.023127] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023195] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.023199] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023203] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023210] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:56.752 [2024-04-26 13:35:14.023219] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023231] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023235] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.023242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.023261] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023322] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023329] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:56.752 [2024-04-26 13:35:14.023333] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023337] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023354] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023363] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.023370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.023388] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023444] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023451] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.023455] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023459] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023465] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:56.752 [2024-04-26 13:35:14.023470] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023479] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023589] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:56.752 [2024-04-26 13:35:14.023595] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023605] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023609] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023613] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.023621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.023639] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023698] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023705] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.023708] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023713] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023718] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:56.752 [2024-04-26 13:35:14.023728] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023733] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023737] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.752 [2024-04-26 13:35:14.023745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.752 [2024-04-26 13:35:14.023762] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.752 [2024-04-26 13:35:14.023837] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.752 [2024-04-26 13:35:14.023846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.752 [2024-04-26 13:35:14.023850] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.752 [2024-04-26 13:35:14.023854] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.752 [2024-04-26 13:35:14.023859] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:56.753 [2024-04-26 13:35:14.023865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.023873] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:56.753 [2024-04-26 13:35:14.023884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.023896] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.023900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.023909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.753 [2024-04-26 13:35:14.023930] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.753 [2024-04-26 13:35:14.024044] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.753 [2024-04-26 13:35:14.024051] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.753 [2024-04-26 13:35:14.024055] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024059] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e5280): datao=0, datal=4096, cccid=0 00:24:56.753 [2024-04-26 13:35:14.024065] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92d940) on tqpair(0x8e5280): expected_datao=0, payload_size=4096 00:24:56.753 [2024-04-26 13:35:14.024070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024080] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024085] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024094] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.753 [2024-04-26 13:35:14.024101] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.753 [2024-04-26 13:35:14.024105] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024109] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.753 [2024-04-26 13:35:14.024118] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:56.753 [2024-04-26 13:35:14.024124] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:56.753 [2024-04-26 13:35:14.024129] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:56.753 [2024-04-26 13:35:14.024140] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:56.753 [2024-04-26 13:35:14.024146] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:56.753 [2024-04-26 13:35:14.024151] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.024161] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.024169] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:56.753 [2024-04-26 13:35:14.024208] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.753 [2024-04-26 13:35:14.024278] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.753 [2024-04-26 13:35:14.024285] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.753 [2024-04-26 13:35:14.024289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024294] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92d940) on tqpair=0x8e5280 00:24:56.753 [2024-04-26 13:35:14.024303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024311] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.753 [2024-04-26 13:35:14.024325] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024333] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.753 [2024-04-26 13:35:14.024345] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024350] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024353] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.753 [2024-04-26 13:35:14.024366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024370] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.753 [2024-04-26 13:35:14.024385] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.024399] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:56.753 [2024-04-26 13:35:14.024407] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024411] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.753 [2024-04-26 13:35:14.024440] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92d940, cid 0, qid 0 00:24:56.753 [2024-04-26 13:35:14.024447] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92daa0, cid 1, qid 0 00:24:56.753 [2024-04-26 13:35:14.024452] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dc00, cid 2, qid 0 00:24:56.753 [2024-04-26 13:35:14.024457] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.753 [2024-04-26 13:35:14.024462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dec0, cid 4, qid 0 00:24:56.753 [2024-04-26 13:35:14.024572] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.753 [2024-04-26 13:35:14.024579] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.753 [2024-04-26 13:35:14.024583] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024587] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dec0) on tqpair=0x8e5280 00:24:56.753 [2024-04-26 13:35:14.024593] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:56.753 [2024-04-26 13:35:14.024599] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:56.753 [2024-04-26 13:35:14.024611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.753 [2024-04-26 13:35:14.024643] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dec0, cid 4, qid 0 00:24:56.753 [2024-04-26 13:35:14.024711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.753 [2024-04-26 13:35:14.024718] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.753 [2024-04-26 13:35:14.024722] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024726] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e5280): datao=0, datal=4096, cccid=4 00:24:56.753 [2024-04-26 13:35:14.024731] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92dec0) on tqpair(0x8e5280): expected_datao=0, payload_size=4096 00:24:56.753 [2024-04-26 13:35:14.024736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024743] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024748] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.753 [2024-04-26 13:35:14.024762] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.753 [2024-04-26 13:35:14.024766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dec0) on tqpair=0x8e5280 00:24:56.753 [2024-04-26 13:35:14.024797] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:56.753 [2024-04-26 13:35:14.024822] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024828] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.753 [2024-04-26 13:35:14.024844] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024848] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.024852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8e5280) 00:24:56.753 [2024-04-26 13:35:14.024858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.753 [2024-04-26 13:35:14.024887] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dec0, cid 4, qid 0 00:24:56.753 [2024-04-26 13:35:14.024896] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92e020, cid 5, qid 0 00:24:56.753 [2024-04-26 13:35:14.025006] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.753 [2024-04-26 13:35:14.025014] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.753 [2024-04-26 13:35:14.025018] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.753 [2024-04-26 13:35:14.025022] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e5280): datao=0, datal=1024, cccid=4 00:24:56.753 [2024-04-26 13:35:14.025027] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92dec0) on tqpair(0x8e5280): expected_datao=0, payload_size=1024 00:24:56.753 [2024-04-26 13:35:14.025031] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.025039] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.025043] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.025049] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.754 [2024-04-26 13:35:14.025055] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.754 [2024-04-26 13:35:14.025058] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.025063] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92e020) on tqpair=0x8e5280 00:24:56.754 [2024-04-26 13:35:14.069841] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.754 [2024-04-26 13:35:14.069896] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.754 [2024-04-26 13:35:14.069903] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.069909] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dec0) on tqpair=0x8e5280 00:24:56.754 [2024-04-26 13:35:14.069959] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.069967] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e5280) 00:24:56.754 [2024-04-26 13:35:14.069985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.754 [2024-04-26 13:35:14.070028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dec0, cid 4, qid 0 00:24:56.754 [2024-04-26 13:35:14.070202] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.754 [2024-04-26 13:35:14.070210] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.754 [2024-04-26 13:35:14.070215] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070219] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e5280): datao=0, datal=3072, cccid=4 00:24:56.754 [2024-04-26 13:35:14.070224] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92dec0) on tqpair(0x8e5280): expected_datao=0, payload_size=3072 00:24:56.754 [2024-04-26 13:35:14.070230] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070241] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070246] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 
13:35:14.070256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.754 [2024-04-26 13:35:14.070262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.754 [2024-04-26 13:35:14.070266] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070271] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dec0) on tqpair=0x8e5280 00:24:56.754 [2024-04-26 13:35:14.070283] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070298] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8e5280) 00:24:56.754 [2024-04-26 13:35:14.070307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.754 [2024-04-26 13:35:14.070338] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dec0, cid 4, qid 0 00:24:56.754 [2024-04-26 13:35:14.070419] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:56.754 [2024-04-26 13:35:14.070426] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:56.754 [2024-04-26 13:35:14.070430] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070434] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8e5280): datao=0, datal=8, cccid=4 00:24:56.754 [2024-04-26 13:35:14.070439] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x92dec0) on tqpair(0x8e5280): expected_datao=0, payload_size=8 00:24:56.754 [2024-04-26 13:35:14.070444] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070451] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.070455] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.110878] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.754 [2024-04-26 13:35:14.110908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.754 [2024-04-26 13:35:14.110914] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.754 [2024-04-26 13:35:14.110919] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dec0) on tqpair=0x8e5280 00:24:56.754 ===================================================== 00:24:56.754 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:56.754 ===================================================== 00:24:56.754 Controller Capabilities/Features 00:24:56.754 ================================ 00:24:56.754 Vendor ID: 0000 00:24:56.754 Subsystem Vendor ID: 0000 00:24:56.754 Serial Number: .................... 00:24:56.754 Model Number: ........................................ 
00:24:56.754 Firmware Version: 24.05 00:24:56.754 Recommended Arb Burst: 0 00:24:56.754 IEEE OUI Identifier: 00 00 00 00:24:56.754 Multi-path I/O 00:24:56.754 May have multiple subsystem ports: No 00:24:56.754 May have multiple controllers: No 00:24:56.754 Associated with SR-IOV VF: No 00:24:56.754 Max Data Transfer Size: 131072 00:24:56.754 Max Number of Namespaces: 0 00:24:56.754 Max Number of I/O Queues: 1024 00:24:56.754 NVMe Specification Version (VS): 1.3 00:24:56.754 NVMe Specification Version (Identify): 1.3 00:24:56.754 Maximum Queue Entries: 128 00:24:56.754 Contiguous Queues Required: Yes 00:24:56.754 Arbitration Mechanisms Supported 00:24:56.754 Weighted Round Robin: Not Supported 00:24:56.754 Vendor Specific: Not Supported 00:24:56.754 Reset Timeout: 15000 ms 00:24:56.754 Doorbell Stride: 4 bytes 00:24:56.754 NVM Subsystem Reset: Not Supported 00:24:56.754 Command Sets Supported 00:24:56.754 NVM Command Set: Supported 00:24:56.754 Boot Partition: Not Supported 00:24:56.754 Memory Page Size Minimum: 4096 bytes 00:24:56.754 Memory Page Size Maximum: 4096 bytes 00:24:56.754 Persistent Memory Region: Not Supported 00:24:56.754 Optional Asynchronous Events Supported 00:24:56.754 Namespace Attribute Notices: Not Supported 00:24:56.754 Firmware Activation Notices: Not Supported 00:24:56.754 ANA Change Notices: Not Supported 00:24:56.754 PLE Aggregate Log Change Notices: Not Supported 00:24:56.754 LBA Status Info Alert Notices: Not Supported 00:24:56.754 EGE Aggregate Log Change Notices: Not Supported 00:24:56.754 Normal NVM Subsystem Shutdown event: Not Supported 00:24:56.754 Zone Descriptor Change Notices: Not Supported 00:24:56.754 Discovery Log Change Notices: Supported 00:24:56.754 Controller Attributes 00:24:56.754 128-bit Host Identifier: Not Supported 00:24:56.754 Non-Operational Permissive Mode: Not Supported 00:24:56.754 NVM Sets: Not Supported 00:24:56.754 Read Recovery Levels: Not Supported 00:24:56.754 Endurance Groups: Not Supported 00:24:56.754 Predictable Latency Mode: Not Supported 00:24:56.754 Traffic Based Keep ALive: Not Supported 00:24:56.754 Namespace Granularity: Not Supported 00:24:56.754 SQ Associations: Not Supported 00:24:56.754 UUID List: Not Supported 00:24:56.754 Multi-Domain Subsystem: Not Supported 00:24:56.754 Fixed Capacity Management: Not Supported 00:24:56.754 Variable Capacity Management: Not Supported 00:24:56.754 Delete Endurance Group: Not Supported 00:24:56.754 Delete NVM Set: Not Supported 00:24:56.754 Extended LBA Formats Supported: Not Supported 00:24:56.754 Flexible Data Placement Supported: Not Supported 00:24:56.754 00:24:56.754 Controller Memory Buffer Support 00:24:56.754 ================================ 00:24:56.754 Supported: No 00:24:56.754 00:24:56.754 Persistent Memory Region Support 00:24:56.754 ================================ 00:24:56.754 Supported: No 00:24:56.754 00:24:56.754 Admin Command Set Attributes 00:24:56.754 ============================ 00:24:56.754 Security Send/Receive: Not Supported 00:24:56.754 Format NVM: Not Supported 00:24:56.754 Firmware Activate/Download: Not Supported 00:24:56.754 Namespace Management: Not Supported 00:24:56.754 Device Self-Test: Not Supported 00:24:56.754 Directives: Not Supported 00:24:56.754 NVMe-MI: Not Supported 00:24:56.754 Virtualization Management: Not Supported 00:24:56.754 Doorbell Buffer Config: Not Supported 00:24:56.754 Get LBA Status Capability: Not Supported 00:24:56.754 Command & Feature Lockdown Capability: Not Supported 00:24:56.754 Abort Command Limit: 1 00:24:56.754 Async 
Event Request Limit: 4 00:24:56.754 Number of Firmware Slots: N/A 00:24:56.754 Firmware Slot 1 Read-Only: N/A 00:24:56.754 Firmware Activation Without Reset: N/A 00:24:56.754 Multiple Update Detection Support: N/A 00:24:56.754 Firmware Update Granularity: No Information Provided 00:24:56.754 Per-Namespace SMART Log: No 00:24:56.754 Asymmetric Namespace Access Log Page: Not Supported 00:24:56.754 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:56.754 Command Effects Log Page: Not Supported 00:24:56.754 Get Log Page Extended Data: Supported 00:24:56.754 Telemetry Log Pages: Not Supported 00:24:56.754 Persistent Event Log Pages: Not Supported 00:24:56.754 Supported Log Pages Log Page: May Support 00:24:56.754 Commands Supported & Effects Log Page: Not Supported 00:24:56.754 Feature Identifiers & Effects Log Page:May Support 00:24:56.754 NVMe-MI Commands & Effects Log Page: May Support 00:24:56.754 Data Area 4 for Telemetry Log: Not Supported 00:24:56.754 Error Log Page Entries Supported: 128 00:24:56.754 Keep Alive: Not Supported 00:24:56.754 00:24:56.754 NVM Command Set Attributes 00:24:56.754 ========================== 00:24:56.754 Submission Queue Entry Size 00:24:56.754 Max: 1 00:24:56.754 Min: 1 00:24:56.754 Completion Queue Entry Size 00:24:56.754 Max: 1 00:24:56.754 Min: 1 00:24:56.754 Number of Namespaces: 0 00:24:56.754 Compare Command: Not Supported 00:24:56.754 Write Uncorrectable Command: Not Supported 00:24:56.754 Dataset Management Command: Not Supported 00:24:56.754 Write Zeroes Command: Not Supported 00:24:56.754 Set Features Save Field: Not Supported 00:24:56.755 Reservations: Not Supported 00:24:56.755 Timestamp: Not Supported 00:24:56.755 Copy: Not Supported 00:24:56.755 Volatile Write Cache: Not Present 00:24:56.755 Atomic Write Unit (Normal): 1 00:24:56.755 Atomic Write Unit (PFail): 1 00:24:56.755 Atomic Compare & Write Unit: 1 00:24:56.755 Fused Compare & Write: Supported 00:24:56.755 Scatter-Gather List 00:24:56.755 SGL Command Set: Supported 00:24:56.755 SGL Keyed: Supported 00:24:56.755 SGL Bit Bucket Descriptor: Not Supported 00:24:56.755 SGL Metadata Pointer: Not Supported 00:24:56.755 Oversized SGL: Not Supported 00:24:56.755 SGL Metadata Address: Not Supported 00:24:56.755 SGL Offset: Supported 00:24:56.755 Transport SGL Data Block: Not Supported 00:24:56.755 Replay Protected Memory Block: Not Supported 00:24:56.755 00:24:56.755 Firmware Slot Information 00:24:56.755 ========================= 00:24:56.755 Active slot: 0 00:24:56.755 00:24:56.755 00:24:56.755 Error Log 00:24:56.755 ========= 00:24:56.755 00:24:56.755 Active Namespaces 00:24:56.755 ================= 00:24:56.755 Discovery Log Page 00:24:56.755 ================== 00:24:56.755 Generation Counter: 2 00:24:56.755 Number of Records: 2 00:24:56.755 Record Format: 0 00:24:56.755 00:24:56.755 Discovery Log Entry 0 00:24:56.755 ---------------------- 00:24:56.755 Transport Type: 3 (TCP) 00:24:56.755 Address Family: 1 (IPv4) 00:24:56.755 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:56.755 Entry Flags: 00:24:56.755 Duplicate Returned Information: 1 00:24:56.755 Explicit Persistent Connection Support for Discovery: 1 00:24:56.755 Transport Requirements: 00:24:56.755 Secure Channel: Not Required 00:24:56.755 Port ID: 0 (0x0000) 00:24:56.755 Controller ID: 65535 (0xffff) 00:24:56.755 Admin Max SQ Size: 128 00:24:56.755 Transport Service Identifier: 4420 00:24:56.755 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:56.755 Transport Address: 10.0.0.2 00:24:56.755 
Discovery Log Entry 1 00:24:56.755 ---------------------- 00:24:56.755 Transport Type: 3 (TCP) 00:24:56.755 Address Family: 1 (IPv4) 00:24:56.755 Subsystem Type: 2 (NVM Subsystem) 00:24:56.755 Entry Flags: 00:24:56.755 Duplicate Returned Information: 0 00:24:56.755 Explicit Persistent Connection Support for Discovery: 0 00:24:56.755 Transport Requirements: 00:24:56.755 Secure Channel: Not Required 00:24:56.755 Port ID: 0 (0x0000) 00:24:56.755 Controller ID: 65535 (0xffff) 00:24:56.755 Admin Max SQ Size: 128 00:24:56.755 Transport Service Identifier: 4420 00:24:56.755 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:56.755 Transport Address: 10.0.0.2 [2024-04-26 13:35:14.111059] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:56.755 [2024-04-26 13:35:14.111080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.755 [2024-04-26 13:35:14.111088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.755 [2024-04-26 13:35:14.111095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.755 [2024-04-26 13:35:14.111102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.755 [2024-04-26 13:35:14.111116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111121] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111126] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111163] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111232] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.111239] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111243] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111247] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111263] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111273] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111395] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.111402] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111406] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111410] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111417] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:56.755 [2024-04-26 13:35:14.111422] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:56.755 [2024-04-26 13:35:14.111432] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111440] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111531] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.111538] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111542] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111558] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111563] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111567] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111592] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111648] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.111654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111658] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111662] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111673] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111707] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 
13:35:14.111771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111792] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111810] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111813] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111842] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.111900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.111907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.111911] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.755 [2024-04-26 13:35:14.111926] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.755 [2024-04-26 13:35:14.111934] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.755 [2024-04-26 13:35:14.111942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.755 [2024-04-26 13:35:14.111960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.755 [2024-04-26 13:35:14.112020] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.755 [2024-04-26 13:35:14.112027] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.755 [2024-04-26 13:35:14.112031] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112035] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112045] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112050] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112054] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112144] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112148] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 
[2024-04-26 13:35:14.112152] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112167] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112171] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112196] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112260] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112267] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112275] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112286] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112291] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112295] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112320] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112381] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112388] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112392] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112396] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112406] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112411] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112415] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112440] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112498] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112505] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112509] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112513] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112523] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112528] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112532] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112557] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112613] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112623] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112627] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112643] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112647] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.112671] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.112733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.112740] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.112744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112748] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.112759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.112768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8e5280) 00:24:56.756 [2024-04-26 13:35:14.112776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.756 [2024-04-26 13:35:14.116826] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x92dd60, cid 3, qid 0 00:24:56.756 [2024-04-26 13:35:14.116892] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:56.756 [2024-04-26 13:35:14.116900] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:56.756 [2024-04-26 13:35:14.116904] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:56.756 [2024-04-26 13:35:14.116908] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x92dd60) on tqpair=0x8e5280 00:24:56.756 [2024-04-26 13:35:14.116918] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:56.756 00:24:56.756 13:35:14 -- host/identify.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:56.756 [2024-04-26 13:35:14.156703] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:56.756 [2024-04-26 13:35:14.156758] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80583 ] 00:24:57.019 [2024-04-26 13:35:14.298148] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:57.019 [2024-04-26 13:35:14.298247] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:57.019 [2024-04-26 13:35:14.298256] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:57.019 [2024-04-26 13:35:14.298274] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:57.019 [2024-04-26 13:35:14.298298] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:57.019 [2024-04-26 13:35:14.298508] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:57.019 [2024-04-26 13:35:14.298566] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22bb280 0 00:24:57.019 [2024-04-26 13:35:14.302797] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:57.019 [2024-04-26 13:35:14.302822] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:57.019 [2024-04-26 13:35:14.302829] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:57.019 [2024-04-26 13:35:14.302833] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:57.019 [2024-04-26 13:35:14.302888] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.302896] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.302900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.302918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:57.019 [2024-04-26 13:35:14.302951] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.310810] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.310842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 [2024-04-26 13:35:14.310849] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.310855] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.310878] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:57.019 [2024-04-26 13:35:14.310891] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:57.019 [2024-04-26 13:35:14.310900] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 
00:24:57.019 [2024-04-26 13:35:14.310925] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.310931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.310936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.310950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.310987] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.311077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.311084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 [2024-04-26 13:35:14.311088] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311092] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.311104] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:57.019 [2024-04-26 13:35:14.311113] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:57.019 [2024-04-26 13:35:14.311121] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311129] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.311137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.311157] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.311225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.311233] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 [2024-04-26 13:35:14.311236] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.311248] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:57.019 [2024-04-26 13:35:14.311258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311266] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311270] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311274] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.311281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.311299] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, 
cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.311359] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.311366] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 [2024-04-26 13:35:14.311370] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311374] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.311382] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311393] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311401] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.311409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.311427] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.311484] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.311491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 [2024-04-26 13:35:14.311495] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311499] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.311505] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:57.019 [2024-04-26 13:35:14.311511] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311626] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:57.019 [2024-04-26 13:35:14.311639] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311656] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311660] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.311668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.311689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.311752] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.019 [2024-04-26 13:35:14.311759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.019 
[2024-04-26 13:35:14.311763] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311767] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.019 [2024-04-26 13:35:14.311774] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:57.019 [2024-04-26 13:35:14.311798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311804] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.019 [2024-04-26 13:35:14.311808] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.019 [2024-04-26 13:35:14.311815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.019 [2024-04-26 13:35:14.311836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.019 [2024-04-26 13:35:14.312346] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.020 [2024-04-26 13:35:14.312360] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.020 [2024-04-26 13:35:14.312365] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312369] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.020 [2024-04-26 13:35:14.312376] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:57.020 [2024-04-26 13:35:14.312381] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312391] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:57.020 [2024-04-26 13:35:14.312402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312422] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.020 [2024-04-26 13:35:14.312451] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.020 [2024-04-26 13:35:14.312603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.020 [2024-04-26 13:35:14.312610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.020 [2024-04-26 13:35:14.312614] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312619] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=4096, cccid=0 00:24:57.020 [2024-04-26 13:35:14.312624] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2303940) on tqpair(0x22bb280): expected_datao=0, payload_size=4096 00:24:57.020 [2024-04-26 13:35:14.312629] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312639] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312644] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312654] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.020 [2024-04-26 13:35:14.312660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.020 [2024-04-26 13:35:14.312663] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312667] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.020 [2024-04-26 13:35:14.312679] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:57.020 [2024-04-26 13:35:14.312685] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:57.020 [2024-04-26 13:35:14.312690] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:57.020 [2024-04-26 13:35:14.312700] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:57.020 [2024-04-26 13:35:14.312706] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:57.020 [2024-04-26 13:35:14.312711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312721] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312730] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312734] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312738] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:57.020 [2024-04-26 13:35:14.312776] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.020 [2024-04-26 13:35:14.312866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.020 [2024-04-26 13:35:14.312873] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.020 [2024-04-26 13:35:14.312877] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312881] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303940) on tqpair=0x22bb280 00:24:57.020 [2024-04-26 13:35:14.312891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.020 [2024-04-26 13:35:14.312914] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312918] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312922] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.020 [2024-04-26 13:35:14.312935] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312939] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312943] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.020 [2024-04-26 13:35:14.312956] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.312964] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.312970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.020 [2024-04-26 13:35:14.312975] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312989] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.312997] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313001] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.313008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.020 [2024-04-26 13:35:14.313032] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303940, cid 0, qid 0 00:24:57.020 [2024-04-26 13:35:14.313039] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303aa0, cid 1, qid 0 00:24:57.020 [2024-04-26 13:35:14.313044] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303c00, cid 2, qid 0 00:24:57.020 [2024-04-26 13:35:14.313050] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.020 [2024-04-26 13:35:14.313055] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.020 [2024-04-26 13:35:14.313162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.020 [2024-04-26 13:35:14.313169] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.020 [2024-04-26 13:35:14.313173] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313177] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.020 [2024-04-26 13:35:14.313185] 
nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:57.020 [2024-04-26 13:35:14.313190] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.313199] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.313207] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.313214] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313218] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313222] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.313229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:57.020 [2024-04-26 13:35:14.313253] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.020 [2024-04-26 13:35:14.313313] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.020 [2024-04-26 13:35:14.313319] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.020 [2024-04-26 13:35:14.313323] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313327] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.020 [2024-04-26 13:35:14.313389] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.313401] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:57.020 [2024-04-26 13:35:14.313410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.020 [2024-04-26 13:35:14.313422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.020 [2024-04-26 13:35:14.313442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.020 [2024-04-26 13:35:14.313514] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.020 [2024-04-26 13:35:14.313526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.020 [2024-04-26 13:35:14.313531] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313535] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=4096, cccid=4 00:24:57.020 [2024-04-26 13:35:14.313540] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2303ec0) on tqpair(0x22bb280): expected_datao=0, payload_size=4096 00:24:57.020 [2024-04-26 13:35:14.313545] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.020 [2024-04-26 
13:35:14.313553] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313557] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.020 [2024-04-26 13:35:14.313566] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.313572] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.313576] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.313594] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:57.021 [2024-04-26 13:35:14.313609] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.313620] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.313628] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.313640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.313661] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.021 [2024-04-26 13:35:14.313800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.021 [2024-04-26 13:35:14.313809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.021 [2024-04-26 13:35:14.313813] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313817] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=4096, cccid=4 00:24:57.021 [2024-04-26 13:35:14.313822] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2303ec0) on tqpair(0x22bb280): expected_datao=0, payload_size=4096 00:24:57.021 [2024-04-26 13:35:14.313826] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313834] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313839] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.313853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.313857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313861] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.313884] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.313896] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
00:24:57.021 [2024-04-26 13:35:14.313905] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.313910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.313917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.313939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.021 [2024-04-26 13:35:14.314022] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.021 [2024-04-26 13:35:14.314029] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.021 [2024-04-26 13:35:14.314033] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314037] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=4096, cccid=4 00:24:57.021 [2024-04-26 13:35:14.314042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2303ec0) on tqpair(0x22bb280): expected_datao=0, payload_size=4096 00:24:57.021 [2024-04-26 13:35:14.314047] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314054] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314058] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314066] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314073] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314076] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314081] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314098] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314107] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314119] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314126] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314132] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314138] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:57.021 [2024-04-26 13:35:14.314143] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:57.021 [2024-04-26 13:35:14.314149] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:57.021 [2024-04-26 13:35:14.314170] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 
13:35:14.314175] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314190] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.021 [2024-04-26 13:35:14.314232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.021 [2024-04-26 13:35:14.314240] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304020, cid 5, qid 0 00:24:57.021 [2024-04-26 13:35:14.314341] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314354] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314359] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314363] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314371] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314385] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304020) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314397] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314402] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314429] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304020, cid 5, qid 0 00:24:57.021 [2024-04-26 13:35:14.314491] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314498] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314502] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314506] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304020) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314517] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314522] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314547] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304020, cid 5, qid 0 00:24:57.021 [2024-04-26 13:35:14.314608] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314615] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314618] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314622] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304020) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314673] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304020, cid 5, qid 0 00:24:57.021 [2024-04-26 13:35:14.314730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.021 [2024-04-26 13:35:14.314738] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.021 [2024-04-26 13:35:14.314742] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314747] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304020) on tqpair=0x22bb280 00:24:57.021 [2024-04-26 13:35:14.314762] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314767] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314800] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314805] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314821] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.021 [2024-04-26 13:35:14.314825] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x22bb280) 00:24:57.021 [2024-04-26 13:35:14.314831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.021 [2024-04-26 13:35:14.314840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.314844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22bb280) 00:24:57.022 [2024-04-26 13:35:14.314850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.022 [2024-04-26 13:35:14.314873] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304020, cid 5, qid 0 
00:24:57.022 [2024-04-26 13:35:14.314880] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303ec0, cid 4, qid 0 00:24:57.022 [2024-04-26 13:35:14.314885] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2304180, cid 6, qid 0 00:24:57.022 [2024-04-26 13:35:14.314890] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23042e0, cid 7, qid 0 00:24:57.022 [2024-04-26 13:35:14.315045] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.022 [2024-04-26 13:35:14.315052] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.022 [2024-04-26 13:35:14.315056] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315060] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=8192, cccid=5 00:24:57.022 [2024-04-26 13:35:14.315065] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2304020) on tqpair(0x22bb280): expected_datao=0, payload_size=8192 00:24:57.022 [2024-04-26 13:35:14.315070] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315087] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315092] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.022 [2024-04-26 13:35:14.315104] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.022 [2024-04-26 13:35:14.315107] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315111] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=512, cccid=4 00:24:57.022 [2024-04-26 13:35:14.315116] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2303ec0) on tqpair(0x22bb280): expected_datao=0, payload_size=512 00:24:57.022 [2024-04-26 13:35:14.315120] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315127] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315131] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315136] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.022 [2024-04-26 13:35:14.315142] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.022 [2024-04-26 13:35:14.315146] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315150] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=512, cccid=6 00:24:57.022 [2024-04-26 13:35:14.315154] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2304180) on tqpair(0x22bb280): expected_datao=0, payload_size=512 00:24:57.022 [2024-04-26 13:35:14.315159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315165] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315169] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315175] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.022 [2024-04-26 13:35:14.315180] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.022 [2024-04-26 13:35:14.315184] 
nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315188] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22bb280): datao=0, datal=4096, cccid=7 00:24:57.022 [2024-04-26 13:35:14.315192] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x23042e0) on tqpair(0x22bb280): expected_datao=0, payload_size=4096 00:24:57.022 [2024-04-26 13:35:14.315197] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315204] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315208] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.022 [2024-04-26 13:35:14.315231] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.022 [2024-04-26 13:35:14.315235] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315239] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304020) on tqpair=0x22bb280 00:24:57.022 [2024-04-26 13:35:14.315260] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.022 [2024-04-26 13:35:14.315267] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.022 [2024-04-26 13:35:14.315270] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315280] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303ec0) on tqpair=0x22bb280 00:24:57.022 [2024-04-26 13:35:14.315292] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.022 [2024-04-26 13:35:14.315299] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.022 [2024-04-26 13:35:14.315303] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315307] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2304180) on tqpair=0x22bb280 00:24:57.022 [2024-04-26 13:35:14.315316] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.022 [2024-04-26 13:35:14.315325] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.022 [2024-04-26 13:35:14.315329] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.022 [2024-04-26 13:35:14.315333] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23042e0) on tqpair=0x22bb280 00:24:57.022 ===================================================== 00:24:57.022 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:57.022 ===================================================== 00:24:57.022 Controller Capabilities/Features 00:24:57.022 ================================ 00:24:57.022 Vendor ID: 8086 00:24:57.022 Subsystem Vendor ID: 8086 00:24:57.022 Serial Number: SPDK00000000000001 00:24:57.022 Model Number: SPDK bdev Controller 00:24:57.022 Firmware Version: 24.05 00:24:57.022 Recommended Arb Burst: 6 00:24:57.022 IEEE OUI Identifier: e4 d2 5c 00:24:57.022 Multi-path I/O 00:24:57.022 May have multiple subsystem ports: Yes 00:24:57.022 May have multiple controllers: Yes 00:24:57.022 Associated with SR-IOV VF: No 00:24:57.022 Max Data Transfer Size: 131072 00:24:57.022 Max Number of Namespaces: 32 00:24:57.022 Max Number of I/O Queues: 127 00:24:57.022 NVMe Specification Version (VS): 1.3 00:24:57.022 NVMe Specification Version 
(Identify): 1.3
00:24:57.022 Maximum Queue Entries: 128
00:24:57.022 Contiguous Queues Required: Yes
00:24:57.022 Arbitration Mechanisms Supported
00:24:57.022 Weighted Round Robin: Not Supported
00:24:57.022 Vendor Specific: Not Supported
00:24:57.022 Reset Timeout: 15000 ms
00:24:57.022 Doorbell Stride: 4 bytes
00:24:57.022 NVM Subsystem Reset: Not Supported
00:24:57.022 Command Sets Supported
00:24:57.022 NVM Command Set: Supported
00:24:57.022 Boot Partition: Not Supported
00:24:57.022 Memory Page Size Minimum: 4096 bytes
00:24:57.022 Memory Page Size Maximum: 4096 bytes
00:24:57.022 Persistent Memory Region: Not Supported
00:24:57.022 Optional Asynchronous Events Supported
00:24:57.022 Namespace Attribute Notices: Supported
00:24:57.022 Firmware Activation Notices: Not Supported
00:24:57.022 ANA Change Notices: Not Supported
00:24:57.022 PLE Aggregate Log Change Notices: Not Supported
00:24:57.022 LBA Status Info Alert Notices: Not Supported
00:24:57.022 EGE Aggregate Log Change Notices: Not Supported
00:24:57.022 Normal NVM Subsystem Shutdown event: Not Supported
00:24:57.022 Zone Descriptor Change Notices: Not Supported
00:24:57.022 Discovery Log Change Notices: Not Supported
00:24:57.022 Controller Attributes
00:24:57.022 128-bit Host Identifier: Supported
00:24:57.022 Non-Operational Permissive Mode: Not Supported
00:24:57.022 NVM Sets: Not Supported
00:24:57.022 Read Recovery Levels: Not Supported
00:24:57.022 Endurance Groups: Not Supported
00:24:57.022 Predictable Latency Mode: Not Supported
00:24:57.022 Traffic Based Keep ALive: Not Supported
00:24:57.022 Namespace Granularity: Not Supported
00:24:57.022 SQ Associations: Not Supported
00:24:57.022 UUID List: Not Supported
00:24:57.022 Multi-Domain Subsystem: Not Supported
00:24:57.022 Fixed Capacity Management: Not Supported
00:24:57.022 Variable Capacity Management: Not Supported
00:24:57.022 Delete Endurance Group: Not Supported
00:24:57.022 Delete NVM Set: Not Supported
00:24:57.022 Extended LBA Formats Supported: Not Supported
00:24:57.022 Flexible Data Placement Supported: Not Supported
00:24:57.022 
00:24:57.022 Controller Memory Buffer Support
00:24:57.022 ================================
00:24:57.022 Supported: No
00:24:57.022 
00:24:57.022 Persistent Memory Region Support
00:24:57.022 ================================
00:24:57.022 Supported: No
00:24:57.022 
00:24:57.022 Admin Command Set Attributes
00:24:57.022 ============================
00:24:57.022 Security Send/Receive: Not Supported
00:24:57.022 Format NVM: Not Supported
00:24:57.022 Firmware Activate/Download: Not Supported
00:24:57.022 Namespace Management: Not Supported
00:24:57.022 Device Self-Test: Not Supported
00:24:57.022 Directives: Not Supported
00:24:57.022 NVMe-MI: Not Supported
00:24:57.022 Virtualization Management: Not Supported
00:24:57.022 Doorbell Buffer Config: Not Supported
00:24:57.022 Get LBA Status Capability: Not Supported
00:24:57.022 Command & Feature Lockdown Capability: Not Supported
00:24:57.022 Abort Command Limit: 4
00:24:57.022 Async Event Request Limit: 4
00:24:57.022 Number of Firmware Slots: N/A
00:24:57.022 Firmware Slot 1 Read-Only: N/A
00:24:57.022 Firmware Activation Without Reset: N/A
00:24:57.022 Multiple Update Detection Support: N/A
00:24:57.022 Firmware Update Granularity: No Information Provided
00:24:57.022 Per-Namespace SMART Log: No
00:24:57.023 Asymmetric Namespace Access Log Page: Not Supported
00:24:57.023 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:57.023 Command Effects Log Page: Supported
00:24:57.023 Get Log Page Extended Data: Supported
00:24:57.023 Telemetry Log Pages: Not Supported
00:24:57.023 Persistent Event Log Pages: Not Supported
00:24:57.023 Supported Log Pages Log Page: May Support
00:24:57.023 Commands Supported & Effects Log Page: Not Supported
00:24:57.023 Feature Identifiers & Effects Log Page:May Support
00:24:57.023 NVMe-MI Commands & Effects Log Page: May Support
00:24:57.023 Data Area 4 for Telemetry Log: Not Supported
00:24:57.023 Error Log Page Entries Supported: 128
00:24:57.023 Keep Alive: Supported
00:24:57.023 Keep Alive Granularity: 10000 ms
00:24:57.023 
00:24:57.023 NVM Command Set Attributes
00:24:57.023 ==========================
00:24:57.023 Submission Queue Entry Size
00:24:57.023 Max: 64
00:24:57.023 Min: 64
00:24:57.023 Completion Queue Entry Size
00:24:57.023 Max: 16
00:24:57.023 Min: 16
00:24:57.023 Number of Namespaces: 32
00:24:57.023 Compare Command: Supported
00:24:57.023 Write Uncorrectable Command: Not Supported
00:24:57.023 Dataset Management Command: Supported
00:24:57.023 Write Zeroes Command: Supported
00:24:57.023 Set Features Save Field: Not Supported
00:24:57.023 Reservations: Supported
00:24:57.023 Timestamp: Not Supported
00:24:57.023 Copy: Supported
00:24:57.023 Volatile Write Cache: Present
00:24:57.023 Atomic Write Unit (Normal): 1
00:24:57.023 Atomic Write Unit (PFail): 1
00:24:57.023 Atomic Compare & Write Unit: 1
00:24:57.023 Fused Compare & Write: Supported
00:24:57.023 Scatter-Gather List
00:24:57.023 SGL Command Set: Supported
00:24:57.023 SGL Keyed: Supported
00:24:57.023 SGL Bit Bucket Descriptor: Not Supported
00:24:57.023 SGL Metadata Pointer: Not Supported
00:24:57.023 Oversized SGL: Not Supported
00:24:57.023 SGL Metadata Address: Not Supported
00:24:57.023 SGL Offset: Supported
00:24:57.023 Transport SGL Data Block: Not Supported
00:24:57.023 Replay Protected Memory Block: Not Supported
00:24:57.023 
00:24:57.023 Firmware Slot Information
00:24:57.023 =========================
00:24:57.023 Active slot: 1
00:24:57.023 Slot 1 Firmware Revision: 24.05
00:24:57.023 
00:24:57.023 
00:24:57.023 Commands Supported and Effects
00:24:57.023 ==============================
00:24:57.023 Admin Commands
00:24:57.023 --------------
00:24:57.023 Get Log Page (02h): Supported
00:24:57.023 Identify (06h): Supported
00:24:57.023 Abort (08h): Supported
00:24:57.023 Set Features (09h): Supported
00:24:57.023 Get Features (0Ah): Supported
00:24:57.023 Asynchronous Event Request (0Ch): Supported
00:24:57.023 Keep Alive (18h): Supported
00:24:57.023 I/O Commands
00:24:57.023 ------------
00:24:57.023 Flush (00h): Supported LBA-Change
00:24:57.023 Write (01h): Supported LBA-Change
00:24:57.023 Read (02h): Supported
00:24:57.023 Compare (05h): Supported
00:24:57.023 Write Zeroes (08h): Supported LBA-Change
00:24:57.023 Dataset Management (09h): Supported LBA-Change
00:24:57.023 Copy (19h): Supported LBA-Change
00:24:57.023 Unknown (79h): Supported LBA-Change
00:24:57.023 Unknown (7Ah): Supported
00:24:57.023 
00:24:57.023 Error Log
00:24:57.023 =========
00:24:57.023 
00:24:57.023 Arbitration
00:24:57.023 ===========
00:24:57.023 Arbitration Burst: 1
00:24:57.023 
00:24:57.023 Power Management
00:24:57.023 ================
00:24:57.023 Number of Power States: 1
00:24:57.023 Current Power State: Power State #0
00:24:57.023 Power State #0:
00:24:57.023 Max Power: 0.00 W
00:24:57.023 Non-Operational State: Operational
00:24:57.023 Entry Latency: Not Reported
00:24:57.023 Exit Latency: Not Reported
00:24:57.023 Relative Read Throughput: 0
00:24:57.023 Relative
Read Latency: 0 00:24:57.023 Relative Write Throughput: 0 00:24:57.023 Relative Write Latency: 0 00:24:57.023 Idle Power: Not Reported 00:24:57.023 Active Power: Not Reported 00:24:57.023 Non-Operational Permissive Mode: Not Supported 00:24:57.023 00:24:57.023 Health Information 00:24:57.023 ================== 00:24:57.023 Critical Warnings: 00:24:57.023 Available Spare Space: OK 00:24:57.023 Temperature: OK 00:24:57.023 Device Reliability: OK 00:24:57.023 Read Only: No 00:24:57.023 Volatile Memory Backup: OK 00:24:57.023 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:57.023 Temperature Threshold: [2024-04-26 13:35:14.315463] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315470] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x22bb280) 00:24:57.023 [2024-04-26 13:35:14.315478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.023 [2024-04-26 13:35:14.315502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23042e0, cid 7, qid 0 00:24:57.023 [2024-04-26 13:35:14.315574] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.023 [2024-04-26 13:35:14.315581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.023 [2024-04-26 13:35:14.315585] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315589] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x23042e0) on tqpair=0x22bb280 00:24:57.023 [2024-04-26 13:35:14.315628] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:57.023 [2024-04-26 13:35:14.315644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.023 [2024-04-26 13:35:14.315652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.023 [2024-04-26 13:35:14.315664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.023 [2024-04-26 13:35:14.315670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.023 [2024-04-26 13:35:14.315680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.023 [2024-04-26 13:35:14.315697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.023 [2024-04-26 13:35:14.315719] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.023 [2024-04-26 13:35:14.315791] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.023 [2024-04-26 13:35:14.315800] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.023 [2024-04-26 13:35:14.315804] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315808] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.023 [2024-04-26 13:35:14.315818] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315822] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315826] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.023 [2024-04-26 13:35:14.315834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.023 [2024-04-26 13:35:14.315858] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.023 [2024-04-26 13:35:14.315943] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.023 [2024-04-26 13:35:14.315950] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.023 [2024-04-26 13:35:14.315954] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.023 [2024-04-26 13:35:14.315958] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.315964] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:57.024 [2024-04-26 13:35:14.315970] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:57.024 [2024-04-26 13:35:14.315980] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.315984] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.315988] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.315995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316014] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316086] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316102] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316107] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316111] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316135] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316192] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316199] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316203] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:57.024 [2024-04-26 13:35:14.316207] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316218] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316222] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316226] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316251] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316316] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316320] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316324] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316336] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316345] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316369] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316425] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316431] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316435] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316439] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316450] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316455] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316459] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316484] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316541] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316548] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316552] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316556] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316567] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316572] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316576] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316658] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316669] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316673] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316684] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316689] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316794] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316797] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316802] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316824] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316829] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316833] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316860] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.316917] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.316924] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.316928] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316932] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.316943] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.316948] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 
13:35:14.316951] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.316959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.316976] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.317034] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.317041] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.317045] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317049] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.317060] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317065] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317069] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.317076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.317093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.317154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.317166] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.317171] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317175] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.317187] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317192] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317196] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.024 [2024-04-26 13:35:14.317203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.024 [2024-04-26 13:35:14.317222] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.024 [2024-04-26 13:35:14.317281] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.024 [2024-04-26 13:35:14.317288] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.024 [2024-04-26 13:35:14.317292] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.024 [2024-04-26 13:35:14.317307] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.024 [2024-04-26 13:35:14.317321] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.317328] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.317345] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.317411] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.317419] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.317422] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317427] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.317438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317443] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317446] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.317454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.317471] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.317532] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.317544] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.317548] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317552] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.317565] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317569] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317573] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.317581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.317600] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.317658] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.317674] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.317679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317683] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.317696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317701] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.317712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.317731] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.317818] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.317830] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.317835] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317839] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.317851] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.317868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.317898] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.317957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.317964] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.317968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317972] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.317983] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317988] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.317992] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318017] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.318082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318085] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318101] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318105] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318109] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318134] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318189] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:57.025 [2024-04-26 13:35:14.318196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318200] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318204] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318215] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318220] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318224] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318249] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318334] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.318342] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318346] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318350] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318361] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318366] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318370] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318396] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318458] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.318474] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318477] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318482] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318497] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318501] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318526] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318581] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.318588] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318592] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318607] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318611] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318615] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318640] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.318695] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.318702] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.025 [2024-04-26 13:35:14.318706] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318710] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.025 [2024-04-26 13:35:14.318721] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318725] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.025 [2024-04-26 13:35:14.318729] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.025 [2024-04-26 13:35:14.318736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.025 [2024-04-26 13:35:14.318753] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.025 [2024-04-26 13:35:14.322799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.025 [2024-04-26 13:35:14.322820] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.026 [2024-04-26 13:35:14.322826] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.026 [2024-04-26 13:35:14.322830] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on tqpair=0x22bb280 00:24:57.026 [2024-04-26 13:35:14.322848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.026 [2024-04-26 13:35:14.322853] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.026 [2024-04-26 13:35:14.322857] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22bb280) 00:24:57.026 [2024-04-26 13:35:14.322867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.026 [2024-04-26 13:35:14.322893] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2303d60, cid 3, qid 0 00:24:57.026 [2024-04-26 13:35:14.322962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.026 [2024-04-26 13:35:14.322968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.026 [2024-04-26 13:35:14.322972] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.026 [2024-04-26 13:35:14.322976] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2303d60) on 
tqpair=0x22bb280 00:24:57.026 [2024-04-26 13:35:14.322986] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:57.026 0 Kelvin (-273 Celsius) 00:24:57.026 Available Spare: 0% 00:24:57.026 Available Spare Threshold: 0% 00:24:57.026 Life Percentage Used: 0% 00:24:57.026 Data Units Read: 0 00:24:57.026 Data Units Written: 0 00:24:57.026 Host Read Commands: 0 00:24:57.026 Host Write Commands: 0 00:24:57.026 Controller Busy Time: 0 minutes 00:24:57.026 Power Cycles: 0 00:24:57.026 Power On Hours: 0 hours 00:24:57.026 Unsafe Shutdowns: 0 00:24:57.026 Unrecoverable Media Errors: 0 00:24:57.026 Lifetime Error Log Entries: 0 00:24:57.026 Warning Temperature Time: 0 minutes 00:24:57.026 Critical Temperature Time: 0 minutes 00:24:57.026 00:24:57.026 Number of Queues 00:24:57.026 ================ 00:24:57.026 Number of I/O Submission Queues: 127 00:24:57.026 Number of I/O Completion Queues: 127 00:24:57.026 00:24:57.026 Active Namespaces 00:24:57.026 ================= 00:24:57.026 Namespace ID:1 00:24:57.026 Error Recovery Timeout: Unlimited 00:24:57.026 Command Set Identifier: NVM (00h) 00:24:57.026 Deallocate: Supported 00:24:57.026 Deallocated/Unwritten Error: Not Supported 00:24:57.026 Deallocated Read Value: Unknown 00:24:57.026 Deallocate in Write Zeroes: Not Supported 00:24:57.026 Deallocated Guard Field: 0xFFFF 00:24:57.026 Flush: Supported 00:24:57.026 Reservation: Supported 00:24:57.026 Namespace Sharing Capabilities: Multiple Controllers 00:24:57.026 Size (in LBAs): 131072 (0GiB) 00:24:57.026 Capacity (in LBAs): 131072 (0GiB) 00:24:57.026 Utilization (in LBAs): 131072 (0GiB) 00:24:57.026 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:57.026 EUI64: ABCDEF0123456789 00:24:57.026 UUID: 9392e5fb-3a7e-4f40-83d5-fb1e2dadd090 00:24:57.026 Thin Provisioning: Not Supported 00:24:57.026 Per-NS Atomic Units: Yes 00:24:57.026 Atomic Boundary Size (Normal): 0 00:24:57.026 Atomic Boundary Size (PFail): 0 00:24:57.026 Atomic Boundary Offset: 0 00:24:57.026 Maximum Single Source Range Length: 65535 00:24:57.026 Maximum Copy Length: 65535 00:24:57.026 Maximum Source Range Count: 1 00:24:57.026 NGUID/EUI64 Never Reused: No 00:24:57.026 Namespace Write Protected: No 00:24:57.026 Number of LBA Formats: 1 00:24:57.026 Current LBA Format: LBA Format #00 00:24:57.026 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:57.026 00:24:57.026 13:35:14 -- host/identify.sh@51 -- # sync 00:24:57.026 13:35:14 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.026 13:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:57.026 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:57.026 13:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:57.026 13:35:14 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:57.026 13:35:14 -- host/identify.sh@56 -- # nvmftestfini 00:24:57.026 13:35:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:57.026 13:35:14 -- nvmf/common.sh@117 -- # sync 00:24:57.026 13:35:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.026 13:35:14 -- nvmf/common.sh@120 -- # set +e 00:24:57.026 13:35:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.026 13:35:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.026 rmmod nvme_tcp 00:24:57.026 rmmod nvme_fabrics 00:24:57.026 rmmod nvme_keyring 00:24:57.026 13:35:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.026 13:35:14 -- nvmf/common.sh@124 -- # set -e 00:24:57.026 
13:35:14 -- nvmf/common.sh@125 -- # return 0 00:24:57.026 13:35:14 -- nvmf/common.sh@478 -- # '[' -n 80521 ']' 00:24:57.026 13:35:14 -- nvmf/common.sh@479 -- # killprocess 80521 00:24:57.026 13:35:14 -- common/autotest_common.sh@936 -- # '[' -z 80521 ']' 00:24:57.026 13:35:14 -- common/autotest_common.sh@940 -- # kill -0 80521 00:24:57.026 13:35:14 -- common/autotest_common.sh@941 -- # uname 00:24:57.284 13:35:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:57.284 13:35:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80521 00:24:57.284 13:35:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:57.284 13:35:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:57.284 killing process with pid 80521 00:24:57.284 13:35:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80521' 00:24:57.284 13:35:14 -- common/autotest_common.sh@955 -- # kill 80521 00:24:57.284 [2024-04-26 13:35:14.485123] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:57.284 13:35:14 -- common/autotest_common.sh@960 -- # wait 80521 00:24:57.543 13:35:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:57.543 13:35:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:57.543 13:35:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:57.543 13:35:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.543 13:35:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.543 13:35:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.543 13:35:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.543 13:35:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.543 13:35:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:57.543 00:24:57.543 real 0m2.729s 00:24:57.543 user 0m7.351s 00:24:57.543 sys 0m0.745s 00:24:57.543 13:35:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:57.543 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:57.543 ************************************ 00:24:57.543 END TEST nvmf_identify 00:24:57.543 ************************************ 00:24:57.543 13:35:14 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:57.543 13:35:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:57.543 13:35:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.543 13:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:57.543 ************************************ 00:24:57.543 START TEST nvmf_perf 00:24:57.543 ************************************ 00:24:57.543 13:35:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:57.802 * Looking for test storage... 
00:24:57.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:57.802 13:35:15 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:57.802 13:35:15 -- nvmf/common.sh@7 -- # uname -s 00:24:57.802 13:35:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.802 13:35:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.802 13:35:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.802 13:35:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.802 13:35:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.802 13:35:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.802 13:35:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.802 13:35:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.803 13:35:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.803 13:35:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.803 13:35:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:57.803 13:35:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:24:57.803 13:35:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.803 13:35:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.803 13:35:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:57.803 13:35:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.803 13:35:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:57.803 13:35:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.803 13:35:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.803 13:35:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.803 13:35:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.803 13:35:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.803 13:35:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.803 13:35:15 -- paths/export.sh@5 -- # export PATH 00:24:57.803 13:35:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.803 13:35:15 -- nvmf/common.sh@47 -- # : 0 00:24:57.803 13:35:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.803 13:35:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.803 13:35:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.803 13:35:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.803 13:35:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.803 13:35:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.803 13:35:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.803 13:35:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.803 13:35:15 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:57.803 13:35:15 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:57.803 13:35:15 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.803 13:35:15 -- host/perf.sh@17 -- # nvmftestinit 00:24:57.803 13:35:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:57.803 13:35:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.803 13:35:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:57.803 13:35:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:57.803 13:35:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:57.803 13:35:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.803 13:35:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.803 13:35:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.803 13:35:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:57.803 13:35:15 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:57.803 13:35:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:57.803 13:35:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:57.803 13:35:15 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:57.803 13:35:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:57.803 13:35:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.803 13:35:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.803 13:35:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:57.803 13:35:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:57.803 13:35:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:57.803 13:35:15 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:57.803 13:35:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:57.803 13:35:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.803 13:35:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:57.803 13:35:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:57.803 13:35:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:57.803 13:35:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:57.803 13:35:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:57.803 13:35:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:57.803 Cannot find device "nvmf_tgt_br" 00:24:57.803 13:35:15 -- nvmf/common.sh@155 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:57.803 Cannot find device "nvmf_tgt_br2" 00:24:57.803 13:35:15 -- nvmf/common.sh@156 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:57.803 13:35:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:57.803 Cannot find device "nvmf_tgt_br" 00:24:57.803 13:35:15 -- nvmf/common.sh@158 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:57.803 Cannot find device "nvmf_tgt_br2" 00:24:57.803 13:35:15 -- nvmf/common.sh@159 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:57.803 13:35:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:57.803 13:35:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:57.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.803 13:35:15 -- nvmf/common.sh@162 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:57.803 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.803 13:35:15 -- nvmf/common.sh@163 -- # true 00:24:57.803 13:35:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:57.803 13:35:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:57.803 13:35:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:57.803 13:35:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:57.803 13:35:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:57.803 13:35:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:58.061 13:35:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:58.061 13:35:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:58.061 13:35:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:58.061 13:35:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:58.061 13:35:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:58.061 13:35:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:58.061 13:35:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:58.061 13:35:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:58.061 13:35:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
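For orientation, the veth/bridge topology that nvmf_veth_init assembles in this stretch of the trace can be condensed into the sketch below. Namespace, interface, and address names are the ones shown in this log; this is an annotated recap, not a verbatim copy of nvmf/common.sh.

  # Condensed from the nvmf_veth_init trace around this point; not a verbatim excerpt.
  # Target runs inside its own network namespace; the initiator stays on the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge the host-side veth ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) through
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # A second target interface (nvmf_tgt_if2, 10.0.0.3/24, bridged via nvmf_tgt_br2) is added the same way.

The pings to 10.0.0.2/10.0.0.3 and from the namespace back to 10.0.0.1, seen a few records further on, confirm the path before the target application is started.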
00:24:58.061 13:35:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:58.061 13:35:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:58.061 13:35:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:58.061 13:35:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:58.061 13:35:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:58.061 13:35:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:58.061 13:35:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:58.061 13:35:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:58.061 13:35:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:58.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:24:58.061 00:24:58.061 --- 10.0.0.2 ping statistics --- 00:24:58.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.061 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:24:58.061 13:35:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:58.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:58.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:24:58.061 00:24:58.061 --- 10.0.0.3 ping statistics --- 00:24:58.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.061 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:58.061 13:35:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:58.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:58.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:24:58.061 00:24:58.061 --- 10.0.0.1 ping statistics --- 00:24:58.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.061 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:24:58.061 13:35:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.061 13:35:15 -- nvmf/common.sh@422 -- # return 0 00:24:58.061 13:35:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:58.061 13:35:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.061 13:35:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:58.061 13:35:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:58.061 13:35:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.061 13:35:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:58.061 13:35:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:58.061 13:35:15 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:58.061 13:35:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:58.061 13:35:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:58.061 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:24:58.061 13:35:15 -- nvmf/common.sh@470 -- # nvmfpid=80757 00:24:58.061 13:35:15 -- nvmf/common.sh@471 -- # waitforlisten 80757 00:24:58.061 13:35:15 -- common/autotest_common.sh@817 -- # '[' -z 80757 ']' 00:24:58.061 13:35:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.061 13:35:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.061 13:35:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.061 13:35:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:58.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.061 13:35:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.061 13:35:15 -- common/autotest_common.sh@10 -- # set +x 00:24:58.061 [2024-04-26 13:35:15.496210] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:24:58.061 [2024-04-26 13:35:15.496303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.319 [2024-04-26 13:35:15.634050] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.577 [2024-04-26 13:35:15.779407] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.577 [2024-04-26 13:35:15.779472] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.577 [2024-04-26 13:35:15.779485] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.577 [2024-04-26 13:35:15.779493] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.577 [2024-04-26 13:35:15.779501] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.577 [2024-04-26 13:35:15.779703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.577 [2024-04-26 13:35:15.779806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.577 [2024-04-26 13:35:15.780609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.577 [2024-04-26 13:35:15.780611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.144 13:35:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.144 13:35:16 -- common/autotest_common.sh@850 -- # return 0 00:24:59.144 13:35:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:59.144 13:35:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:59.144 13:35:16 -- common/autotest_common.sh@10 -- # set +x 00:24:59.144 13:35:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.144 13:35:16 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:24:59.144 13:35:16 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:59.710 13:35:16 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:24:59.710 13:35:16 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:59.968 13:35:17 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:24:59.968 13:35:17 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:00.225 13:35:17 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:00.225 13:35:17 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:25:00.225 13:35:17 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:00.225 13:35:17 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:00.225 13:35:17 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:00.790 [2024-04-26 13:35:17.935002] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.790 13:35:17 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.048 13:35:18 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:01.048 13:35:18 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.306 13:35:18 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:01.306 13:35:18 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:01.563 13:35:18 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.128 [2024-04-26 13:35:19.288959] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.128 13:35:19 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:02.385 13:35:19 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:02.385 13:35:19 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:02.385 13:35:19 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:02.385 13:35:19 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:03.321 Initializing NVMe Controllers 00:25:03.321 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:03.321 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:25:03.321 Initialization complete. Launching workers. 00:25:03.321 ======================================================== 00:25:03.321 Latency(us) 00:25:03.321 Device Information : IOPS MiB/s Average min max 00:25:03.321 PCIE (0000:00:10.0) NSID 1 from core 0: 24283.31 94.86 1318.07 293.60 9485.67 00:25:03.321 ======================================================== 00:25:03.321 Total : 24283.31 94.86 1318.07 293.60 9485.67 00:25:03.321 00:25:03.321 13:35:20 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:04.697 Initializing NVMe Controllers 00:25:04.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:04.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:04.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:04.697 Initialization complete. Launching workers. 
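Stripped of the xtrace noise, the target bring-up that perf.sh performed just above boils down to a short rpc.py sequence; a minimal recap follows (NQN, serial number, bdev names, and addresses are the ones in this log, and individual flags may differ between SPDK revisions):

  # Condensed from the rpc.py calls traced above; not a verbatim excerpt of perf.sh.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o                                   # TCP transport
  $RPC bdev_malloc_create 64 512                                         # 64 MiB RAM-backed bdev -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # namespace 1: malloc bdev
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1          # namespace 2: local NVMe at 0000:00:10.0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The initiator side then exercises the subsystem with spdk_nvme_perf against 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' at the queue depths and I/O sizes shown in the runs that follow.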
00:25:04.697 ======================================================== 00:25:04.697 Latency(us) 00:25:04.697 Device Information : IOPS MiB/s Average min max 00:25:04.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3399.39 13.28 293.82 118.73 4229.25 00:25:04.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8160.63 7950.90 12041.87 00:25:04.697 ======================================================== 00:25:04.697 Total : 3522.89 13.76 569.61 118.73 12041.87 00:25:04.697 00:25:04.697 13:35:22 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.069 Initializing NVMe Controllers 00:25:06.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:06.069 Initialization complete. Launching workers. 00:25:06.069 ======================================================== 00:25:06.069 Latency(us) 00:25:06.069 Device Information : IOPS MiB/s Average min max 00:25:06.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8501.63 33.21 3766.13 759.89 8851.48 00:25:06.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2711.76 10.59 11925.75 5945.34 24038.03 00:25:06.069 ======================================================== 00:25:06.069 Total : 11213.39 43.80 5739.39 759.89 24038.03 00:25:06.069 00:25:06.069 13:35:23 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:25:06.069 13:35:23 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:08.655 Initializing NVMe Controllers 00:25:08.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:08.655 Controller IO queue size 128, less than required. 00:25:08.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:08.655 Controller IO queue size 128, less than required. 00:25:08.655 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:08.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:08.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:08.655 Initialization complete. Launching workers. 
00:25:08.655 ======================================================== 00:25:08.655 Latency(us) 00:25:08.655 Device Information : IOPS MiB/s Average min max 00:25:08.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1455.96 363.99 89173.49 46226.05 152897.70 00:25:08.655 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 541.49 135.37 246510.39 73081.71 351649.67 00:25:08.655 ======================================================== 00:25:08.655 Total : 1997.45 499.36 131825.77 46226.05 351649.67 00:25:08.655 00:25:08.655 13:35:25 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:08.913 No valid NVMe controllers or AIO or URING devices found 00:25:08.913 Initializing NVMe Controllers 00:25:08.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:08.913 Controller IO queue size 128, less than required. 00:25:08.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:08.913 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:08.913 Controller IO queue size 128, less than required. 00:25:08.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:08.913 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:25:08.913 WARNING: Some requested NVMe devices were skipped 00:25:08.913 13:35:26 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:11.455 Initializing NVMe Controllers 00:25:11.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:11.455 Controller IO queue size 128, less than required. 00:25:11.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:11.455 Controller IO queue size 128, less than required. 00:25:11.455 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:11.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:11.455 Initialization complete. Launching workers. 
00:25:11.455 00:25:11.455 ==================== 00:25:11.455 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:11.455 TCP transport: 00:25:11.455 polls: 8782 00:25:11.455 idle_polls: 3469 00:25:11.455 sock_completions: 5313 00:25:11.455 nvme_completions: 3167 00:25:11.455 submitted_requests: 4700 00:25:11.455 queued_requests: 1 00:25:11.455 00:25:11.455 ==================== 00:25:11.455 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:11.455 TCP transport: 00:25:11.455 polls: 9324 00:25:11.455 idle_polls: 5629 00:25:11.455 sock_completions: 3695 00:25:11.455 nvme_completions: 6837 00:25:11.455 submitted_requests: 10212 00:25:11.455 queued_requests: 1 00:25:11.455 ======================================================== 00:25:11.455 Latency(us) 00:25:11.455 Device Information : IOPS MiB/s Average min max 00:25:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 791.06 197.77 167115.22 101420.57 249355.88 00:25:11.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1708.06 427.01 74876.68 34450.50 126946.01 00:25:11.455 ======================================================== 00:25:11.455 Total : 2499.12 624.78 104073.56 34450.50 249355.88 00:25:11.455 00:25:11.455 13:35:28 -- host/perf.sh@66 -- # sync 00:25:11.455 13:35:28 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:11.712 13:35:29 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:11.712 13:35:29 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:11.712 13:35:29 -- host/perf.sh@114 -- # nvmftestfini 00:25:11.712 13:35:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:11.712 13:35:29 -- nvmf/common.sh@117 -- # sync 00:25:11.712 13:35:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.712 13:35:29 -- nvmf/common.sh@120 -- # set +e 00:25:11.712 13:35:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.712 13:35:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.712 rmmod nvme_tcp 00:25:11.712 rmmod nvme_fabrics 00:25:11.712 rmmod nvme_keyring 00:25:11.712 13:35:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.712 13:35:29 -- nvmf/common.sh@124 -- # set -e 00:25:11.712 13:35:29 -- nvmf/common.sh@125 -- # return 0 00:25:11.712 13:35:29 -- nvmf/common.sh@478 -- # '[' -n 80757 ']' 00:25:11.712 13:35:29 -- nvmf/common.sh@479 -- # killprocess 80757 00:25:11.712 13:35:29 -- common/autotest_common.sh@936 -- # '[' -z 80757 ']' 00:25:11.712 13:35:29 -- common/autotest_common.sh@940 -- # kill -0 80757 00:25:11.712 13:35:29 -- common/autotest_common.sh@941 -- # uname 00:25:11.712 13:35:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.712 13:35:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80757 00:25:11.712 killing process with pid 80757 00:25:11.712 13:35:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:11.712 13:35:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:11.712 13:35:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80757' 00:25:11.712 13:35:29 -- common/autotest_common.sh@955 -- # kill 80757 00:25:11.712 13:35:29 -- common/autotest_common.sh@960 -- # wait 80757 00:25:12.645 13:35:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:12.645 13:35:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:12.645 13:35:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:12.645 13:35:29 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.645 13:35:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.645 13:35:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.645 13:35:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.645 13:35:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.645 13:35:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:12.645 00:25:12.645 real 0m15.003s 00:25:12.645 user 0m55.247s 00:25:12.645 sys 0m3.721s 00:25:12.645 13:35:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:12.645 13:35:29 -- common/autotest_common.sh@10 -- # set +x 00:25:12.645 ************************************ 00:25:12.645 END TEST nvmf_perf 00:25:12.645 ************************************ 00:25:12.645 13:35:29 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:12.645 13:35:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:12.645 13:35:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:12.645 13:35:29 -- common/autotest_common.sh@10 -- # set +x 00:25:12.645 ************************************ 00:25:12.645 START TEST nvmf_fio_host 00:25:12.645 ************************************ 00:25:12.645 13:35:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:12.903 * Looking for test storage... 00:25:12.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:12.903 13:35:30 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.903 13:35:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.903 13:35:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.903 13:35:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.903 13:35:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@5 -- # export PATH 00:25:12.903 13:35:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:12.903 13:35:30 -- nvmf/common.sh@7 -- # uname -s 00:25:12.903 13:35:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.903 13:35:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.903 13:35:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.903 13:35:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.903 13:35:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.903 13:35:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.903 13:35:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.903 13:35:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.903 13:35:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.903 13:35:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.903 13:35:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:12.903 13:35:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:12.903 13:35:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.903 13:35:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.903 13:35:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:12.903 13:35:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.903 13:35:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:12.903 13:35:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.903 13:35:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.903 13:35:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.903 13:35:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- paths/export.sh@5 -- # export PATH 00:25:12.903 13:35:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.903 13:35:30 -- nvmf/common.sh@47 -- # : 0 00:25:12.903 13:35:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:12.903 13:35:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:12.903 13:35:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.903 13:35:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.903 13:35:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.903 13:35:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:12.903 13:35:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:12.903 13:35:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:12.903 13:35:30 -- host/fio.sh@12 -- # nvmftestinit 00:25:12.903 13:35:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:12.903 13:35:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.903 13:35:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:12.903 13:35:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:12.903 13:35:30 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:25:12.903 13:35:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.903 13:35:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.903 13:35:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.903 13:35:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:12.904 13:35:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:12.904 13:35:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:12.904 13:35:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:12.904 13:35:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:12.904 13:35:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:12.904 13:35:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:12.904 13:35:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:12.904 13:35:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:12.904 13:35:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:12.904 13:35:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:12.904 13:35:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:12.904 13:35:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:12.904 13:35:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:12.904 13:35:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:12.904 13:35:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:12.904 13:35:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:12.904 13:35:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:12.904 13:35:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:12.904 13:35:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:12.904 Cannot find device "nvmf_tgt_br" 00:25:12.904 13:35:30 -- nvmf/common.sh@155 -- # true 00:25:12.904 13:35:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:12.904 Cannot find device "nvmf_tgt_br2" 00:25:12.904 13:35:30 -- nvmf/common.sh@156 -- # true 00:25:12.904 13:35:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:12.904 13:35:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:12.904 Cannot find device "nvmf_tgt_br" 00:25:12.904 13:35:30 -- nvmf/common.sh@158 -- # true 00:25:12.904 13:35:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:12.904 Cannot find device "nvmf_tgt_br2" 00:25:12.904 13:35:30 -- nvmf/common.sh@159 -- # true 00:25:12.904 13:35:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:12.904 13:35:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:13.161 13:35:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.161 13:35:30 -- nvmf/common.sh@162 -- # true 00:25:13.161 13:35:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.161 13:35:30 -- nvmf/common.sh@163 -- # true 00:25:13.161 13:35:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.161 13:35:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.161 13:35:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:25:13.161 13:35:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.161 13:35:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.161 13:35:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.161 13:35:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.161 13:35:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.161 13:35:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.161 13:35:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:13.161 13:35:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:13.161 13:35:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:13.161 13:35:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:13.161 13:35:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.161 13:35:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.161 13:35:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.161 13:35:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:13.161 13:35:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:13.161 13:35:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.161 13:35:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.161 13:35:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.161 13:35:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.161 13:35:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.161 13:35:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:13.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:25:13.161 00:25:13.161 --- 10.0.0.2 ping statistics --- 00:25:13.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.161 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:25:13.161 13:35:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:13.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:25:13.161 00:25:13.161 --- 10.0.0.3 ping statistics --- 00:25:13.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.161 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:25:13.161 13:35:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:25:13.161 00:25:13.161 --- 10.0.0.1 ping statistics --- 00:25:13.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.161 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:13.161 13:35:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.161 13:35:30 -- nvmf/common.sh@422 -- # return 0 00:25:13.161 13:35:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:13.161 13:35:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.161 13:35:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:13.161 13:35:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:13.161 13:35:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.161 13:35:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:13.161 13:35:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:13.161 13:35:30 -- host/fio.sh@14 -- # [[ y != y ]] 00:25:13.161 13:35:30 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:25:13.161 13:35:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:13.161 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:13.161 13:35:30 -- host/fio.sh@22 -- # nvmfpid=81246 00:25:13.161 13:35:30 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.161 13:35:30 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:13.161 13:35:30 -- host/fio.sh@26 -- # waitforlisten 81246 00:25:13.161 13:35:30 -- common/autotest_common.sh@817 -- # '[' -z 81246 ']' 00:25:13.161 13:35:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.161 13:35:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:13.161 13:35:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.161 13:35:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:13.161 13:35:30 -- common/autotest_common.sh@10 -- # set +x 00:25:13.419 [2024-04-26 13:35:30.640372] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:13.419 [2024-04-26 13:35:30.640474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.419 [2024-04-26 13:35:30.776020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:13.677 [2024-04-26 13:35:30.892607] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.677 [2024-04-26 13:35:30.892659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.677 [2024-04-26 13:35:30.892671] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.677 [2024-04-26 13:35:30.892680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.677 [2024-04-26 13:35:30.892693] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
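For reference, the target above is started with tracing enabled (-e 0xFFFF) under shm id 0, and the startup notice points at spdk_trace for inspecting those events. A minimal sketch of how that snapshot could be taken while the target runs — the build/bin path is an assumption about where this repo places built tools, while the -s/-i arguments and the /dev/shm file name come straight from the notice above:

# capture a snapshot of the nvmf app's trace events (shm id 0), as the startup notice suggests
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# or keep the raw trace file around for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0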
00:25:13.677 [2024-04-26 13:35:30.892866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.677 [2024-04-26 13:35:30.893075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.677 [2024-04-26 13:35:30.893743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.677 [2024-04-26 13:35:30.893774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.267 13:35:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:14.267 13:35:31 -- common/autotest_common.sh@850 -- # return 0 00:25:14.267 13:35:31 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.267 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.267 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.267 [2024-04-26 13:35:31.679243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.267 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.267 13:35:31 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:25:14.267 13:35:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:14.267 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 13:35:31 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:14.525 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.525 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 Malloc1 00:25:14.525 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.525 13:35:31 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.525 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.525 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.525 13:35:31 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:14.525 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.525 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.525 13:35:31 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.525 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.525 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 [2024-04-26 13:35:31.785110] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.525 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.525 13:35:31 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:14.525 13:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:14.525 13:35:31 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 13:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:14.525 13:35:31 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:25:14.525 13:35:31 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:14.525 13:35:31 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
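The fio_nvme helper traced here amounts to running stock fio with SPDK's NVMe fio plugin preloaded and the target's TCP address encoded in the --filename string. A rough by-hand equivalent, using the same paths as the trace (that example_config.fio sets ioengine=spdk and thread=1 is an assumption about the job file, which is not shown in this log):

# preload the SPDK NVMe fio plugin and point the job at the NVMe-oF/TCP subsystem
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096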
00:25:14.525 13:35:31 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:14.525 13:35:31 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:14.525 13:35:31 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:14.525 13:35:31 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:14.525 13:35:31 -- common/autotest_common.sh@1327 -- # shift 00:25:14.525 13:35:31 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:14.525 13:35:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:14.525 13:35:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:14.525 13:35:31 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:14.525 13:35:31 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:14.525 13:35:31 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:14.525 13:35:31 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:14.525 13:35:31 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:14.783 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:14.783 fio-3.35 00:25:14.783 Starting 1 thread 00:25:17.306 00:25:17.306 test: (groupid=0, jobs=1): err= 0: pid=81324: Fri Apr 26 13:35:34 2024 00:25:17.306 read: IOPS=8591, BW=33.6MiB/s (35.2MB/s)(67.4MiB/2007msec) 00:25:17.306 slat (nsec): min=1981, max=366215, avg=2585.76, stdev=3535.08 00:25:17.306 clat (usec): min=3278, max=14049, avg=7783.01, stdev=550.75 00:25:17.306 lat (usec): min=3320, max=14052, avg=7785.60, stdev=550.50 00:25:17.306 clat percentiles (usec): 00:25:17.306 | 1.00th=[ 6652], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7373], 00:25:17.306 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7767], 60.00th=[ 7898], 00:25:17.306 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:25:17.306 | 99.00th=[ 9110], 99.50th=[ 9372], 99.90th=[12518], 99.95th=[13435], 00:25:17.306 | 99.99th=[13960] 00:25:17.306 bw ( KiB/s): min=33296, max=34776, per=99.98%, avg=34360.00, stdev=713.96, samples=4 00:25:17.306 iops : min= 8324, max= 8694, avg=8590.00, stdev=178.49, samples=4 00:25:17.306 write: IOPS=8590, BW=33.6MiB/s (35.2MB/s)(67.3MiB/2007msec); 0 zone resets 00:25:17.306 slat (usec): min=2, max=287, avg= 2.68, stdev= 2.45 00:25:17.306 clat (usec): min=2498, max=13536, avg=7050.75, stdev=478.66 00:25:17.306 lat (usec): min=2512, max=13538, avg=7053.43, stdev=478.52 00:25:17.306 clat percentiles (usec): 00:25:17.306 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:25:17.306 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7177], 00:25:17.306 | 
70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7570], 95.00th=[ 7701], 00:25:17.306 | 99.00th=[ 8094], 99.50th=[ 8225], 99.90th=[11338], 99.95th=[12649], 00:25:17.306 | 99.99th=[13566] 00:25:17.306 bw ( KiB/s): min=34048, max=34624, per=99.98%, avg=34354.00, stdev=256.86, samples=4 00:25:17.306 iops : min= 8512, max= 8656, avg=8588.50, stdev=64.22, samples=4 00:25:17.306 lat (msec) : 4=0.07%, 10=99.72%, 20=0.21% 00:25:17.306 cpu : usr=67.60%, sys=24.08%, ctx=6, majf=0, minf=6 00:25:17.306 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:17.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:17.306 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:17.306 issued rwts: total=17243,17241,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:17.306 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:17.306 00:25:17.306 Run status group 0 (all jobs): 00:25:17.306 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.4MiB (70.6MB), run=2007-2007msec 00:25:17.306 WRITE: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=67.3MiB (70.6MB), run=2007-2007msec 00:25:17.307 13:35:34 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:17.307 13:35:34 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:17.307 13:35:34 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:25:17.307 13:35:34 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:17.307 13:35:34 -- common/autotest_common.sh@1325 -- # local sanitizers 00:25:17.307 13:35:34 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.307 13:35:34 -- common/autotest_common.sh@1327 -- # shift 00:25:17.307 13:35:34 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:25:17.307 13:35:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # grep libasan 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:17.307 13:35:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:17.307 13:35:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:25:17.307 13:35:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:25:17.307 13:35:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:25:17.307 13:35:34 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:25:17.307 13:35:34 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:17.307 test: (g=0): 
rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:17.307 fio-3.35 00:25:17.307 Starting 1 thread 00:25:19.837 00:25:19.837 test: (groupid=0, jobs=1): err= 0: pid=81373: Fri Apr 26 13:35:36 2024 00:25:19.837 read: IOPS=7421, BW=116MiB/s (122MB/s)(233MiB/2006msec) 00:25:19.837 slat (usec): min=3, max=274, avg= 4.21, stdev= 3.35 00:25:19.837 clat (usec): min=2367, max=20680, avg=10197.20, stdev=2517.96 00:25:19.837 lat (usec): min=2371, max=20685, avg=10201.41, stdev=2518.30 00:25:19.837 clat percentiles (usec): 00:25:19.837 | 1.00th=[ 5538], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 8029], 00:25:19.837 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10683], 00:25:19.837 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13304], 95.00th=[14746], 00:25:19.837 | 99.00th=[17695], 99.50th=[19006], 99.90th=[19792], 99.95th=[20317], 00:25:19.837 | 99.99th=[20579] 00:25:19.837 bw ( KiB/s): min=55424, max=69056, per=51.99%, avg=61728.00, stdev=6533.12, samples=4 00:25:19.837 iops : min= 3464, max= 4316, avg=3858.00, stdev=408.32, samples=4 00:25:19.837 write: IOPS=4547, BW=71.0MiB/s (74.5MB/s)(126MiB/1778msec); 0 zone resets 00:25:19.837 slat (usec): min=35, max=336, avg=39.87, stdev= 8.92 00:25:19.837 clat (usec): min=3199, max=20336, avg=12163.10, stdev=2236.31 00:25:19.837 lat (usec): min=3252, max=20379, avg=12202.98, stdev=2236.78 00:25:19.837 clat percentiles (usec): 00:25:19.838 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10290], 00:25:19.838 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12387], 00:25:19.838 | 70.00th=[13042], 80.00th=[13829], 90.00th=[15401], 95.00th=[16319], 00:25:19.838 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19792], 99.95th=[20055], 00:25:19.838 | 99.99th=[20317] 00:25:19.838 bw ( KiB/s): min=56672, max=72704, per=88.09%, avg=64088.00, stdev=7673.53, samples=4 00:25:19.838 iops : min= 3542, max= 4544, avg=4005.50, stdev=479.60, samples=4 00:25:19.838 lat (msec) : 4=0.20%, 10=37.73%, 20=61.99%, 50=0.08% 00:25:19.838 cpu : usr=68.73%, sys=19.95%, ctx=561, majf=0, minf=17 00:25:19.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:19.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:19.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:19.838 issued rwts: total=14887,8085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:19.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:19.838 00:25:19.838 Run status group 0 (all jobs): 00:25:19.838 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=233MiB (244MB), run=2006-2006msec 00:25:19.838 WRITE: bw=71.0MiB/s (74.5MB/s), 71.0MiB/s-71.0MiB/s (74.5MB/s-74.5MB/s), io=126MiB (132MB), run=1778-1778msec 00:25:19.838 13:35:36 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:19.838 13:35:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.838 13:35:36 -- common/autotest_common.sh@10 -- # set +x 00:25:19.838 13:35:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.838 13:35:36 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:25:19.838 13:35:36 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:19.838 13:35:36 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:19.838 13:35:36 -- host/fio.sh@84 -- # nvmftestfini 00:25:19.838 13:35:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:19.838 13:35:36 -- nvmf/common.sh@117 -- # sync 00:25:19.838 13:35:36 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:19.838 13:35:36 -- nvmf/common.sh@120 -- # set +e 00:25:19.838 13:35:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:19.838 13:35:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:19.838 rmmod nvme_tcp 00:25:19.838 rmmod nvme_fabrics 00:25:19.838 rmmod nvme_keyring 00:25:19.838 13:35:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:19.838 13:35:36 -- nvmf/common.sh@124 -- # set -e 00:25:19.838 13:35:36 -- nvmf/common.sh@125 -- # return 0 00:25:19.838 13:35:36 -- nvmf/common.sh@478 -- # '[' -n 81246 ']' 00:25:19.838 13:35:36 -- nvmf/common.sh@479 -- # killprocess 81246 00:25:19.838 13:35:36 -- common/autotest_common.sh@936 -- # '[' -z 81246 ']' 00:25:19.838 13:35:36 -- common/autotest_common.sh@940 -- # kill -0 81246 00:25:19.838 13:35:36 -- common/autotest_common.sh@941 -- # uname 00:25:19.838 13:35:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.838 13:35:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81246 00:25:19.838 killing process with pid 81246 00:25:19.838 13:35:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.838 13:35:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.838 13:35:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81246' 00:25:19.838 13:35:36 -- common/autotest_common.sh@955 -- # kill 81246 00:25:19.838 13:35:36 -- common/autotest_common.sh@960 -- # wait 81246 00:25:19.838 13:35:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:19.838 13:35:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:19.838 13:35:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:19.838 13:35:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.838 13:35:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:19.838 13:35:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.838 13:35:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.838 13:35:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.838 13:35:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:19.838 00:25:19.838 real 0m7.206s 00:25:19.838 user 0m27.789s 00:25:19.838 sys 0m2.174s 00:25:19.838 13:35:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:19.838 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:19.838 ************************************ 00:25:19.838 END TEST nvmf_fio_host 00:25:19.838 ************************************ 00:25:20.096 13:35:37 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:20.096 13:35:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:20.096 13:35:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:20.096 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.096 ************************************ 00:25:20.096 START TEST nvmf_failover 00:25:20.096 ************************************ 00:25:20.096 13:35:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:20.096 * Looking for test storage... 
00:25:20.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:20.096 13:35:37 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:20.096 13:35:37 -- nvmf/common.sh@7 -- # uname -s 00:25:20.096 13:35:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.096 13:35:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.096 13:35:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.096 13:35:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.096 13:35:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.096 13:35:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.096 13:35:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.096 13:35:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.096 13:35:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.096 13:35:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.096 13:35:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:20.097 13:35:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:20.097 13:35:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.097 13:35:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.097 13:35:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:20.097 13:35:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.097 13:35:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:20.097 13:35:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.097 13:35:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.097 13:35:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.097 13:35:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.097 13:35:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.097 13:35:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.097 13:35:37 -- paths/export.sh@5 -- # export PATH 00:25:20.097 13:35:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.097 13:35:37 -- nvmf/common.sh@47 -- # : 0 00:25:20.097 13:35:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.097 13:35:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.097 13:35:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.097 13:35:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.097 13:35:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.097 13:35:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.097 13:35:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.097 13:35:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.097 13:35:37 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.097 13:35:37 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.097 13:35:37 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.097 13:35:37 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.097 13:35:37 -- host/failover.sh@18 -- # nvmftestinit 00:25:20.097 13:35:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:20.097 13:35:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.097 13:35:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:20.097 13:35:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:20.097 13:35:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:20.097 13:35:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.097 13:35:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.097 13:35:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.097 13:35:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:20.097 13:35:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:20.097 13:35:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:20.097 13:35:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:20.097 13:35:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:20.097 13:35:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:20.097 13:35:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.097 13:35:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.097 13:35:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:20.097 13:35:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:20.097 13:35:37 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:20.097 13:35:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:20.097 13:35:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:20.097 13:35:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.097 13:35:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:20.097 13:35:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:20.097 13:35:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:20.097 13:35:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:20.097 13:35:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:20.355 13:35:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:20.355 Cannot find device "nvmf_tgt_br" 00:25:20.355 13:35:37 -- nvmf/common.sh@155 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:20.355 Cannot find device "nvmf_tgt_br2" 00:25:20.355 13:35:37 -- nvmf/common.sh@156 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:20.355 13:35:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:20.355 Cannot find device "nvmf_tgt_br" 00:25:20.355 13:35:37 -- nvmf/common.sh@158 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:20.355 Cannot find device "nvmf_tgt_br2" 00:25:20.355 13:35:37 -- nvmf/common.sh@159 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:20.355 13:35:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:20.355 13:35:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:20.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.355 13:35:37 -- nvmf/common.sh@162 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:20.355 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.355 13:35:37 -- nvmf/common.sh@163 -- # true 00:25:20.355 13:35:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:20.355 13:35:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:20.355 13:35:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:20.355 13:35:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:20.355 13:35:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:20.355 13:35:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:20.355 13:35:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:20.355 13:35:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:20.355 13:35:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:20.355 13:35:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:20.355 13:35:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:20.355 13:35:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:20.355 13:35:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:20.355 13:35:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:25:20.355 13:35:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:20.355 13:35:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:20.355 13:35:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:20.355 13:35:37 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:20.355 13:35:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:20.614 13:35:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:20.614 13:35:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:20.614 13:35:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:20.614 13:35:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:20.614 13:35:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:20.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:25:20.614 00:25:20.614 --- 10.0.0.2 ping statistics --- 00:25:20.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.614 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:20.614 13:35:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:20.614 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:20.614 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:25:20.614 00:25:20.614 --- 10.0.0.3 ping statistics --- 00:25:20.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.614 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:20.614 13:35:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:20.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:20.614 00:25:20.614 --- 10.0.0.1 ping statistics --- 00:25:20.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.614 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:20.614 13:35:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.614 13:35:37 -- nvmf/common.sh@422 -- # return 0 00:25:20.614 13:35:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:20.614 13:35:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.614 13:35:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:20.614 13:35:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:20.614 13:35:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.614 13:35:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:20.614 13:35:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:20.614 13:35:37 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:20.614 13:35:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:20.614 13:35:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:20.614 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
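The nvmf_veth_init sequence traced above (here and in the earlier fio host test) builds a small veth/bridge topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2, the initiator stays in the root namespace on 10.0.0.1, and both sides hang off the nvmf_br bridge. A condensed sketch of the same steps — interface names and the 10.0.0.0/24 addresses mirror the log; the second target interface used for 10.0.0.3 is omitted for brevity:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port through
ping -c 1 10.0.0.2                                            # sanity-check initiator -> target path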
00:25:20.614 13:35:37 -- nvmf/common.sh@470 -- # nvmfpid=81591 00:25:20.614 13:35:37 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:20.614 13:35:37 -- nvmf/common.sh@471 -- # waitforlisten 81591 00:25:20.614 13:35:37 -- common/autotest_common.sh@817 -- # '[' -z 81591 ']' 00:25:20.614 13:35:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.614 13:35:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.614 13:35:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.614 13:35:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.614 13:35:37 -- common/autotest_common.sh@10 -- # set +x 00:25:20.614 [2024-04-26 13:35:37.942273] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:20.614 [2024-04-26 13:35:37.942574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.873 [2024-04-26 13:35:38.082323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.873 [2024-04-26 13:35:38.236434] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.873 [2024-04-26 13:35:38.236908] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.873 [2024-04-26 13:35:38.237062] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.873 [2024-04-26 13:35:38.237199] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.873 [2024-04-26 13:35:38.237331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:20.873 [2024-04-26 13:35:38.237730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.873 [2024-04-26 13:35:38.238153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.873 [2024-04-26 13:35:38.238277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.807 13:35:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.808 13:35:38 -- common/autotest_common.sh@850 -- # return 0 00:25:21.808 13:35:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:21.808 13:35:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:21.808 13:35:38 -- common/autotest_common.sh@10 -- # set +x 00:25:21.808 13:35:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.808 13:35:38 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:21.808 [2024-04-26 13:35:39.241600] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.067 13:35:39 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:22.325 Malloc0 00:25:22.325 13:35:39 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.584 13:35:39 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.843 13:35:40 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.843 [2024-04-26 13:35:40.290434] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.102 13:35:40 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:23.102 [2024-04-26 13:35:40.530596] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:23.361 13:35:40 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:23.620 [2024-04-26 13:35:40.822894] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:23.620 13:35:40 -- host/failover.sh@31 -- # bdevperf_pid=81705 00:25:23.620 13:35:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.620 13:35:40 -- host/failover.sh@34 -- # waitforlisten 81705 /var/tmp/bdevperf.sock 00:25:23.620 13:35:40 -- common/autotest_common.sh@817 -- # '[' -z 81705 ']' 00:25:23.620 13:35:40 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:23.620 13:35:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.620 13:35:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.620 13:35:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
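The trace above then configures the target over its RPC socket and starts bdevperf as the host-side I/O generator. A condensed replay of that sequence, using the same rpc.py calls and flags that appear in the log; the $rpc shorthand and the port loop are mine:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                               # three listeners = three candidate paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &   # idle (-z) until perform_tests is sent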
00:25:23.620 13:35:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:23.620 13:35:40 -- common/autotest_common.sh@10 -- # set +x
00:25:24.556 13:35:41 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:24.557 13:35:41 -- common/autotest_common.sh@850 -- # return 0
00:25:24.557 13:35:41 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:24.816 NVMe0n1
00:25:24.816 13:35:42 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:25.075 00
00:25:25.334 13:35:42 -- host/failover.sh@39 -- # run_test_pid=81757
00:25:25.335 13:35:42 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:25.335 13:35:42 -- host/failover.sh@41 -- # sleep 1
00:25:26.272 13:35:43 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:26.531 [2024-04-26 13:35:43.813924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34190 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many more times for tqpair=0xc34190 while the 4420 path is torn down; the repeats are omitted here ...]
00:25:26.532 13:35:43 -- host/failover.sh@45 -- # sleep 3
00:25:29.815 13:35:46 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:29.815 00
00:25:29.815 13:35:47 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:30.073 [2024-04-26 13:35:47.441660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc34d10 is same with the state(5) to be set
[... the same *ERROR* line repeats many more times for tqpair=0xc34d10 while the 4421 path is torn down; the repeats are omitted here ...]
00:25:30.074 13:35:47 -- host/failover.sh@50 -- # sleep 3
00:25:33.359 13:35:50 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:33.359 [2024-04-26 13:35:50.740397] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:33.359 13:35:50 -- host/failover.sh@55 -- # sleep 1
00:25:34.733 13:35:51 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:34.733 [2024-04-26 13:35:52.091993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set
[... the same *ERROR* line keeps repeating for tqpair=0xa8c7f0 while the 4422 path is torn down; part of the repetition is omitted here ...]
00:25:34.734 [2024-04-26 13:35:52.092315]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092323] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092349] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092357] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092366] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092375] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092408] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092416] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092424] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 [2024-04-26 13:35:52.092432] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8c7f0 is same with the state(5) to be set 00:25:34.734 13:35:52 -- host/failover.sh@59 -- # wait 81757 00:25:41.321 0 00:25:41.321 13:35:57 -- host/failover.sh@61 -- # killprocess 81705 00:25:41.321 13:35:57 -- common/autotest_common.sh@936 -- # '[' -z 81705 ']' 00:25:41.321 13:35:57 -- common/autotest_common.sh@940 -- # kill -0 81705 00:25:41.321 13:35:57 -- common/autotest_common.sh@941 -- # uname 00:25:41.321 13:35:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:41.321 13:35:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81705 00:25:41.321 killing process with pid 81705 00:25:41.321 13:35:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:41.321 13:35:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:41.321 13:35:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81705' 00:25:41.321 13:35:57 -- common/autotest_common.sh@955 -- # kill 81705 00:25:41.321 13:35:57 -- common/autotest_common.sh@960 -- # wait 81705 00:25:41.321 13:35:58 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:41.321 [2024-04-26 13:35:40.904336] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
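The try.txt dump that starts just above and continues below is bdevperf's own log; the ABORTED - SQ DELETION completions in it are the I/O that was in flight each time a listener was pulled. As a rough sketch, the failover exercise the test just ran, reconstructed from the rpc.py calls traced above (the brpc/trpc shorthands and the comments are my reading of the intent, not part of the test script):

brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"   # host (bdevperf) RPC
trpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                               # target RPC
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # primary path
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # alternate path
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # 15 s verify run
sleep 1
$trpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop the primary path
sleep 3
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # third path
$trpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # force another switch
sleep 3
$trpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # bring the original port back
sleep 1
$trpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # final switch back to 4420
wait   # collect the bdevperf result, then dump try.txt as above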
00:25:41.321 [2024-04-26 13:35:40.904477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81705 ] 00:25:41.321 [2024-04-26 13:35:41.044582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.321 [2024-04-26 13:35:41.164731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.321 Running I/O for 15 seconds... 00:25:41.321 [2024-04-26 13:35:43.814848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.814898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.814928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.814944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.814961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.814976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.814991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.815005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.815021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.815034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.815049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.815063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.815078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.815092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.321 [2024-04-26 13:35:43.815107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.321 [2024-04-26 13:35:43.815120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 
13:35:43.815150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.815984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.815998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.816013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.816026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.816041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.816062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.816079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.816092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.816107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.322 [2024-04-26 13:35:43.816120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:41.322 [2024-04-26 13:35:43.816135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.816978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.817007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.817022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.817036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.817052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.817066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.817081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.817094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.323 [2024-04-26 13:35:43.817110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.323 [2024-04-26 13:35:43.817123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:41.324 [2024-04-26 13:35:43.817362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.324 [2024-04-26 13:35:43.817684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817969] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.817985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.817999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.818014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.818028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.818043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.818056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.818072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.818085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.818100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.324 [2024-04-26 13:35:43.818113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.324 [2024-04-26 13:35:43.818128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:41.325 [2024-04-26 13:35:43.818598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.325 [2024-04-26 13:35:43.818668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:43.818811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.325 [2024-04-26 13:35:43.818872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.325 [2024-04-26 13:35:43.818884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78576 len:8 PRP1 0x0 PRP2 0x0 00:25:41.325 [2024-04-26 13:35:43.818897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:43.818966] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf682e0 was disconnected and freed. reset controller. 
00:25:41.325 [2024-04-26 13:35:43.818994] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:41.325 [2024-04-26 13:35:43.819056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.325 [2024-04-26 13:35:43.819076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.325 [2024-04-26 13:35:43.819091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.325 [2024-04-26 13:35:43.819105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.325 [2024-04-26 13:35:43.819119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.325 [2024-04-26 13:35:43.819132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.325 [2024-04-26 13:35:43.819146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.325 [2024-04-26 13:35:43.819159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.325 [2024-04-26 13:35:43.819173] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.325 [2024-04-26 13:35:43.819220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf005e0 (9): Bad file descriptor
00:25:41.325 [2024-04-26 13:35:43.823078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.325 [2024-04-26 13:35:43.863862] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
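The block above (aborted admin and I/O commands, the controller reported in failed state, then a successful reset) is bdev_nvme failing the active path over from 10.0.0.2:4420 to 10.0.0.2:4421. The test script driving this is not shown in this part of the log; as a sketch only, one plausible way to trigger this notice with the stock SPDK RPC tooling, assuming the host-side controller was already attached with 10.0.0.2:4421 registered as an alternate path for nqn.2016-06.io.spdk:cnode1, is to publish the new listener and then drop the one currently in use:

  # sketch only - make sure the listener the host should fail over to exists
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # sketch only - remove the active listener; queued I/O is aborted (SQ DELETION) and the controller resets against 4421
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420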
00:25:41.325 [2024-04-26 13:35:47.442411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:47.442485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:47.442519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:47.442535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.325 [2024-04-26 13:35:47.442586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.325 [2024-04-26 13:35:47.442601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442845] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.442961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.442976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.326 [2024-04-26 13:35:47.443510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.326 [2024-04-26 13:35:47.443524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76624 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.443975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.443990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:41.327 [2024-04-26 13:35:47.444061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.327 [2024-04-26 13:35:47.444361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.327 [2024-04-26 13:35:47.444374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.444967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.444989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.328 [2024-04-26 13:35:47.445394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.328 [2024-04-26 13:35:47.445407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:41.329 [2024-04-26 13:35:47.445600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.445975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.445989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.329 [2024-04-26 13:35:47.446376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.329 [2024-04-26 13:35:47.446389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefcdc0 is same with the state(5) to be set 00:25:41.330 [2024-04-26 13:35:47.446427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.330 [2024-04-26 13:35:47.446445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.330 [2024-04-26 13:35:47.446468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77320 len:8 PRP1 0x0 PRP2 0x0 00:25:41.330 [2024-04-26 13:35:47.446483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446547] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xefcdc0 was disconnected and freed. reset controller. 
00:25:41.330 [2024-04-26 13:35:47.446568] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:41.330 [2024-04-26 13:35:47.446624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.330 [2024-04-26 13:35:47.446650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.330 [2024-04-26 13:35:47.446690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.330 [2024-04-26 13:35:47.446717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.330 [2024-04-26 13:35:47.446743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:47.446757] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:41.330 [2024-04-26 13:35:47.446823] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf005e0 (9): Bad file descriptor 00:25:41.330 [2024-04-26 13:35:47.450761] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:41.330 [2024-04-26 13:35:47.491906] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:41.330 [2024-04-26 13:35:52.092522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.092981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.092996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.330 [2024-04-26 13:35:52.093152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.330 [2024-04-26 13:35:52.093181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.330 [2024-04-26 13:35:52.093392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.330 [2024-04-26 13:35:52.093407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13304 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.093981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.093994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:41.331 [2024-04-26 13:35:52.094175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.331 [2024-04-26 13:35:52.094333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.331 [2024-04-26 13:35:52.094353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094483] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094832] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.332 [2024-04-26 13:35:52.094896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.094985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.094999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.332 [2024-04-26 13:35:52.095178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.332 [2024-04-26 13:35:52.095193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:41.333 [2024-04-26 13:35:52.095458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095774] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:41.333 [2024-04-26 13:35:52.095857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.095983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.095998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096084] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.333 [2024-04-26 13:35:52.096209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.333 [2024-04-26 13:35:52.096223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:13840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.334 [2024-04-26 13:35:52.096554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11123f0 is same with the state(5) to be set 00:25:41.334 [2024-04-26 13:35:52.096593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:41.334 [2024-04-26 13:35:52.096605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:41.334 [2024-04-26 13:35:52.096615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:8 PRP1 0x0 PRP2 0x0 00:25:41.334 [2024-04-26 13:35:52.096628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.334 [2024-04-26 13:35:52.096691] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11123f0 was disconnected and freed. reset controller. 
00:25:41.330 [2024-04-26 13:35:52.096711] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:41.330 [2024-04-26 13:35:52.096795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.330 [2024-04-26 13:35:52.096819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.330 [2024-04-26 13:35:52.096834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.330 [2024-04-26 13:35:52.096859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.330 [2024-04-26 13:35:52.096874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.330 [2024-04-26 13:35:52.096887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.330 [2024-04-26 13:35:52.096901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:41.330 [2024-04-26 13:35:52.096914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:41.330 [2024-04-26 13:35:52.096927] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:41.330 [2024-04-26 13:35:52.100809] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:41.330 [2024-04-26 13:35:52.100865] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf005e0 (9): Bad file descriptor
00:25:41.330 [2024-04-26 13:35:52.140337] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:41.330
00:25:41.330 Latency(us)
00:25:41.330 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:41.330 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:41.330 Verification LBA range: start 0x0 length 0x4000
00:25:41.330 NVMe0n1                     :      15.01    8388.48      32.77     251.43       0.00   14783.74     640.47   24665.37
00:25:41.330 ===================================================================================================================
00:25:41.330 Total                       :               8388.48      32.77     251.43       0.00   14783.74     640.47   24665.37
00:25:41.330 Received shutdown signal, test time was about 15.000000 seconds
00:25:41.330
00:25:41.330 Latency(us)
00:25:41.330 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:41.330 ===================================================================================================================
00:25:41.330 Total                       :                  0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:25:41.330 13:35:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:41.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:41.334 13:35:58 -- host/failover.sh@65 -- # count=3
00:25:41.334 13:35:58 -- host/failover.sh@67 -- # (( count != 3 ))
00:25:41.334 13:35:58 -- host/failover.sh@73 -- # bdevperf_pid=81961
00:25:41.334 13:35:58 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:41.334 13:35:58 -- host/failover.sh@75 -- # waitforlisten 81961 /var/tmp/bdevperf.sock
00:25:41.334 13:35:58 -- common/autotest_common.sh@817 -- # '[' -z 81961 ']'
00:25:41.334 13:35:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:41.334 13:35:58 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:41.334 13:35:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:41.334 13:35:58 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:41.334 13:35:58 -- common/autotest_common.sh@10 -- # set +x
00:25:41.900 13:35:59 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:41.900 13:35:59 -- common/autotest_common.sh@850 -- # return 0
00:25:41.900 13:35:59 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:42.157 [2024-04-26 13:35:59.465551] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:42.157 13:35:59 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:42.416 [2024-04-26 13:35:59.729743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:42.416 13:35:59 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:42.674 NVMe0n1
00:25:42.674 13:36:00 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:43.241
00:25:43.241 13:36:00 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:43.241
00:25:43.500 13:36:00 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:43.500 13:36:00 -- host/failover.sh@82 -- # grep -q NVMe0
00:25:43.500 13:36:00 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:43.786 13:36:01 -- host/failover.sh@87 -- # sleep 3
00:25:47.069 13:36:04 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:47.069 13:36:04 -- host/failover.sh@88 -- # grep -q NVMe0
00:25:47.327 13:36:04 -- host/failover.sh@90 -- # run_test_pid=82098
00:25:47.327 13:36:04 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:47.327 13:36:04 -- host/failover.sh@92 -- # wait 82098
00:25:48.262 0
00:25:48.262 13:36:05 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:48.262 [2024-04-26 13:35:58.093622] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:48.262 [2024-04-26 13:35:58.093763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81961 ] 00:25:48.262 [2024-04-26 13:35:58.231970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.262 [2024-04-26 13:35:58.365851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.262 [2024-04-26 13:36:01.161590] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:48.262 [2024-04-26 13:36:01.161735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.262 [2024-04-26 13:36:01.161763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.262 [2024-04-26 13:36:01.161794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.262 [2024-04-26 13:36:01.161811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.262 [2024-04-26 13:36:01.161826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.262 [2024-04-26 13:36:01.161841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.262 [2024-04-26 13:36:01.161855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:48.262 [2024-04-26 13:36:01.161870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:48.262 [2024-04-26 13:36:01.161884] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:48.262 [2024-04-26 13:36:01.161936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:48.262 [2024-04-26 13:36:01.161966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aa5e0 (9): Bad file descriptor 00:25:48.262 [2024-04-26 13:36:01.164816] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:48.262 Running I/O for 1 seconds... 
00:25:48.262 00:25:48.262 Latency(us) 00:25:48.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.262 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:48.262 Verification LBA range: start 0x0 length 0x4000 00:25:48.262 NVMe0n1 : 1.01 7993.93 31.23 0.00 0.00 15931.62 1936.29 18111.77 00:25:48.262 =================================================================================================================== 00:25:48.262 Total : 7993.93 31.23 0.00 0.00 15931.62 1936.29 18111.77 00:25:48.262 13:36:05 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:48.262 13:36:05 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:48.520 13:36:05 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:48.779 13:36:06 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:48.779 13:36:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:49.037 13:36:06 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:49.296 13:36:06 -- host/failover.sh@101 -- # sleep 3 00:25:52.579 13:36:09 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:52.579 13:36:09 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:52.579 13:36:09 -- host/failover.sh@108 -- # killprocess 81961 00:25:52.579 13:36:09 -- common/autotest_common.sh@936 -- # '[' -z 81961 ']' 00:25:52.579 13:36:09 -- common/autotest_common.sh@940 -- # kill -0 81961 00:25:52.579 13:36:09 -- common/autotest_common.sh@941 -- # uname 00:25:52.580 13:36:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:52.580 13:36:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81961 00:25:52.580 13:36:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:52.580 13:36:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:52.580 13:36:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81961' 00:25:52.580 killing process with pid 81961 00:25:52.580 13:36:09 -- common/autotest_common.sh@955 -- # kill 81961 00:25:52.580 13:36:09 -- common/autotest_common.sh@960 -- # wait 81961 00:25:52.837 13:36:10 -- host/failover.sh@110 -- # sync 00:25:52.837 13:36:10 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.095 13:36:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:53.095 13:36:10 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:53.095 13:36:10 -- host/failover.sh@116 -- # nvmftestfini 00:25:53.095 13:36:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:53.095 13:36:10 -- nvmf/common.sh@117 -- # sync 00:25:53.095 13:36:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.095 13:36:10 -- nvmf/common.sh@120 -- # set +e 00:25:53.095 13:36:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.095 13:36:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.095 rmmod nvme_tcp 00:25:53.095 rmmod nvme_fabrics 00:25:53.095 rmmod nvme_keyring 00:25:53.095 13:36:10 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:25:53.095 13:36:10 -- nvmf/common.sh@124 -- # set -e 00:25:53.095 13:36:10 -- nvmf/common.sh@125 -- # return 0 00:25:53.095 13:36:10 -- nvmf/common.sh@478 -- # '[' -n 81591 ']' 00:25:53.095 13:36:10 -- nvmf/common.sh@479 -- # killprocess 81591 00:25:53.095 13:36:10 -- common/autotest_common.sh@936 -- # '[' -z 81591 ']' 00:25:53.095 13:36:10 -- common/autotest_common.sh@940 -- # kill -0 81591 00:25:53.095 13:36:10 -- common/autotest_common.sh@941 -- # uname 00:25:53.095 13:36:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:53.095 13:36:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81591 00:25:53.095 13:36:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:53.095 killing process with pid 81591 00:25:53.095 13:36:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:53.095 13:36:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81591' 00:25:53.095 13:36:10 -- common/autotest_common.sh@955 -- # kill 81591 00:25:53.095 13:36:10 -- common/autotest_common.sh@960 -- # wait 81591 00:25:53.662 13:36:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:53.662 13:36:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:53.662 13:36:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:53.662 13:36:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.662 13:36:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.662 13:36:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.662 13:36:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.662 13:36:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.662 13:36:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:53.662 00:25:53.662 real 0m33.472s 00:25:53.662 user 2m10.125s 00:25:53.662 sys 0m4.910s 00:25:53.663 13:36:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:53.663 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:25:53.663 ************************************ 00:25:53.663 END TEST nvmf_failover 00:25:53.663 ************************************ 00:25:53.663 13:36:10 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.663 13:36:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:53.663 13:36:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.663 13:36:10 -- common/autotest_common.sh@10 -- # set +x 00:25:53.663 ************************************ 00:25:53.663 START TEST nvmf_discovery 00:25:53.663 ************************************ 00:25:53.663 13:36:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:53.663 * Looking for test storage... 
00:25:53.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:53.663 13:36:11 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.663 13:36:11 -- nvmf/common.sh@7 -- # uname -s 00:25:53.663 13:36:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.663 13:36:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.663 13:36:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.663 13:36:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.663 13:36:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.663 13:36:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.663 13:36:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.663 13:36:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.663 13:36:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.663 13:36:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.663 13:36:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:53.663 13:36:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:25:53.663 13:36:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.663 13:36:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.663 13:36:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:53.663 13:36:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.663 13:36:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.663 13:36:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.663 13:36:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.663 13:36:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.663 13:36:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.663 13:36:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.663 13:36:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.663 13:36:11 -- paths/export.sh@5 -- # export PATH 00:25:53.663 13:36:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.663 13:36:11 -- nvmf/common.sh@47 -- # : 0 00:25:53.663 13:36:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.663 13:36:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.663 13:36:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.663 13:36:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.663 13:36:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.663 13:36:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.663 13:36:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.663 13:36:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.663 13:36:11 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:53.663 13:36:11 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:53.663 13:36:11 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:53.663 13:36:11 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:53.663 13:36:11 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:53.663 13:36:11 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:53.663 13:36:11 -- host/discovery.sh@25 -- # nvmftestinit 00:25:53.663 13:36:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:53.663 13:36:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.663 13:36:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:53.663 13:36:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:53.663 13:36:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:53.663 13:36:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.663 13:36:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.663 13:36:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.922 13:36:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:53.922 13:36:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:53.922 13:36:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:53.922 13:36:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:53.922 13:36:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:53.922 13:36:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:53.922 13:36:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.922 13:36:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.922 13:36:11 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:53.922 13:36:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:53.922 13:36:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:53.922 13:36:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.922 13:36:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.922 13:36:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.922 13:36:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.922 13:36:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.922 13:36:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.922 13:36:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.922 13:36:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:53.922 13:36:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:53.922 Cannot find device "nvmf_tgt_br" 00:25:53.922 13:36:11 -- nvmf/common.sh@155 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.922 Cannot find device "nvmf_tgt_br2" 00:25:53.922 13:36:11 -- nvmf/common.sh@156 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:53.922 13:36:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:53.922 Cannot find device "nvmf_tgt_br" 00:25:53.922 13:36:11 -- nvmf/common.sh@158 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:53.922 Cannot find device "nvmf_tgt_br2" 00:25:53.922 13:36:11 -- nvmf/common.sh@159 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:53.922 13:36:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:53.922 13:36:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.922 13:36:11 -- nvmf/common.sh@162 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.922 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.922 13:36:11 -- nvmf/common.sh@163 -- # true 00:25:53.922 13:36:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.922 13:36:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.922 13:36:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.922 13:36:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.922 13:36:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.922 13:36:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.922 13:36:11 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.922 13:36:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.922 13:36:11 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.922 13:36:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:53.922 13:36:11 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:53.922 13:36:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:53.922 13:36:11 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:53.922 13:36:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.180 13:36:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.180 13:36:11 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.180 13:36:11 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:54.180 13:36:11 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:54.180 13:36:11 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.180 13:36:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.180 13:36:11 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.180 13:36:11 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.180 13:36:11 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.180 13:36:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:54.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:25:54.180 00:25:54.180 --- 10.0.0.2 ping statistics --- 00:25:54.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.180 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:25:54.180 13:36:11 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:54.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:54.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:25:54.180 00:25:54.180 --- 10.0.0.3 ping statistics --- 00:25:54.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.180 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:25:54.180 13:36:11 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:54.180 00:25:54.180 --- 10.0.0.1 ping statistics --- 00:25:54.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.180 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:54.180 13:36:11 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.180 13:36:11 -- nvmf/common.sh@422 -- # return 0 00:25:54.180 13:36:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:54.180 13:36:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.180 13:36:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:54.180 13:36:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:54.180 13:36:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.180 13:36:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:54.180 13:36:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:54.180 13:36:11 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:54.180 13:36:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:54.180 13:36:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:54.180 13:36:11 -- common/autotest_common.sh@10 -- # set +x 00:25:54.180 13:36:11 -- nvmf/common.sh@470 -- # nvmfpid=82416 00:25:54.180 13:36:11 -- nvmf/common.sh@471 -- # waitforlisten 82416 00:25:54.180 13:36:11 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:54.180 13:36:11 -- common/autotest_common.sh@817 -- # '[' -z 82416 ']' 00:25:54.180 13:36:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.180 13:36:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.180 13:36:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.180 13:36:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.180 13:36:11 -- common/autotest_common.sh@10 -- # set +x 00:25:54.180 [2024-04-26 13:36:11.549446] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:25:54.180 [2024-04-26 13:36:11.549557] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.439 [2024-04-26 13:36:11.690157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.439 [2024-04-26 13:36:11.826479] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.439 [2024-04-26 13:36:11.826544] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.439 [2024-04-26 13:36:11.826559] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.439 [2024-04-26 13:36:11.826570] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.439 [2024-04-26 13:36:11.826580] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
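The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify the veth topology that nvmf_veth_init just built: the target runs inside the nvmf_tgt_ns_spdk namespace and is reachable from the initiator interface only through the nvmf_br bridge. Condensed to the essential commands from the setup above (the second target interface carrying 10.0.0.3 is created the same way):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end, 10.0.0.2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # bridge joining the *_br veth peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# all interfaces and the bridge are then brought up before the pings are run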
00:25:54.439 [2024-04-26 13:36:11.826632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.373 13:36:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.373 13:36:12 -- common/autotest_common.sh@850 -- # return 0 00:25:55.373 13:36:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:55.373 13:36:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 13:36:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.373 13:36:12 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.373 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 [2024-04-26 13:36:12.646691] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.373 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.373 13:36:12 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:55.373 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 [2024-04-26 13:36:12.654834] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:55.373 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.373 13:36:12 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:55.373 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 null0 00:25:55.373 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.373 13:36:12 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:55.373 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 null1 00:25:55.373 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.373 13:36:12 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:55.373 13:36:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:55.373 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.374 13:36:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:55.374 13:36:12 -- host/discovery.sh@45 -- # hostpid=82466 00:25:55.374 13:36:12 -- host/discovery.sh@46 -- # waitforlisten 82466 /tmp/host.sock 00:25:55.374 13:36:12 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:55.374 13:36:12 -- common/autotest_common.sh@817 -- # '[' -z 82466 ']' 00:25:55.374 13:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:25:55.374 13:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.374 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:55.374 13:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:55.374 13:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.374 13:36:12 -- common/autotest_common.sh@10 -- # set +x 00:25:55.374 [2024-04-26 13:36:12.746765] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:25:55.374 [2024-04-26 13:36:12.746938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82466 ] 00:25:55.632 [2024-04-26 13:36:12.889536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.632 [2024-04-26 13:36:13.014717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.566 13:36:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:56.567 13:36:13 -- common/autotest_common.sh@850 -- # return 0 00:25:56.567 13:36:13 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.567 13:36:13 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@72 -- # notify_id=0 00:25:56.567 13:36:13 -- host/discovery.sh@83 -- # get_subsystem_names 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # sort 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # xargs 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:56.567 13:36:13 -- host/discovery.sh@84 -- # get_bdev_list 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # sort 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # xargs 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:56.567 13:36:13 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@87 -- # get_subsystem_names 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.567 13:36:13 -- host/discovery.sh@59 
-- # sort 00:25:56.567 13:36:13 -- host/discovery.sh@59 -- # xargs 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:13 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:56.567 13:36:13 -- host/discovery.sh@88 -- # get_bdev_list 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.567 13:36:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:13 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # xargs 00:25:56.567 13:36:13 -- host/discovery.sh@55 -- # sort 00:25:56.567 13:36:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:14 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:56.567 13:36:14 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:56.567 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.567 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.567 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.567 13:36:14 -- host/discovery.sh@91 -- # get_subsystem_names 00:25:56.567 13:36:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # sort 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # xargs 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.826 13:36:14 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:56.826 13:36:14 -- host/discovery.sh@92 -- # get_bdev_list 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # sort 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # xargs 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.826 13:36:14 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:56.826 13:36:14 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 [2024-04-26 13:36:14.119271] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.826 13:36:14 -- host/discovery.sh@97 -- # get_subsystem_names 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # xargs 00:25:56.826 13:36:14 -- host/discovery.sh@59 -- # sort 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.826 13:36:14 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:56.826 13:36:14 
-- host/discovery.sh@98 -- # get_bdev_list 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # sort 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 13:36:14 -- host/discovery.sh@55 -- # xargs 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.826 13:36:14 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:56.826 13:36:14 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:56.826 13:36:14 -- host/discovery.sh@79 -- # expected_count=0 00:25:56.826 13:36:14 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:56.826 13:36:14 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:56.826 13:36:14 -- common/autotest_common.sh@901 -- # local max=10 00:25:56.826 13:36:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:56.826 13:36:14 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:56.826 13:36:14 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:56.826 13:36:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:56.826 13:36:14 -- host/discovery.sh@74 -- # jq '. | length' 00:25:56.826 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.826 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:56.826 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.084 13:36:14 -- host/discovery.sh@74 -- # notification_count=0 00:25:57.084 13:36:14 -- host/discovery.sh@75 -- # notify_id=0 00:25:57.084 13:36:14 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:57.084 13:36:14 -- common/autotest_common.sh@904 -- # return 0 00:25:57.084 13:36:14 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:57.084 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.084 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:57.084 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.084 13:36:14 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.084 13:36:14 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:57.084 13:36:14 -- common/autotest_common.sh@901 -- # local max=10 00:25:57.084 13:36:14 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:57.084 13:36:14 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:57.084 13:36:14 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:57.084 13:36:14 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:57.084 13:36:14 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:57.084 13:36:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:57.084 13:36:14 -- common/autotest_common.sh@10 -- # set +x 00:25:57.084 13:36:14 -- host/discovery.sh@59 -- # sort 00:25:57.084 13:36:14 -- host/discovery.sh@59 -- # xargs 00:25:57.084 13:36:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:57.085 13:36:14 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:25:57.085 13:36:14 -- common/autotest_common.sh@906 -- # sleep 1 00:25:57.343 [2024-04-26 13:36:14.770273] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:57.343 [2024-04-26 13:36:14.770328] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:57.343 [2024-04-26 13:36:14.770353] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:57.602 [2024-04-26 13:36:14.856415] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:57.602 [2024-04-26 13:36:14.912929] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:57.602 [2024-04-26 13:36:14.912987] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.170 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:58.170 13:36:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.170 13:36:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.170 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.170 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.170 13:36:15 -- host/discovery.sh@59 -- # sort 00:25:58.170 13:36:15 -- host/discovery.sh@59 -- # xargs 00:25:58.170 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.170 13:36:15 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.170 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:58.170 13:36:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.170 13:36:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.170 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.170 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.170 13:36:15 -- host/discovery.sh@55 -- # xargs 00:25:58.170 13:36:15 -- host/discovery.sh@55 -- # sort 00:25:58.170 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.170 13:36:15 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.170 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.170 13:36:15 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:58.170 13:36:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.170 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.170 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.170 13:36:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.170 13:36:15 -- host/discovery.sh@63 -- # sort -n 00:25:58.170 13:36:15 -- host/discovery.sh@63 -- # xargs 00:25:58.170 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:25:58.170 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.170 13:36:15 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:58.170 13:36:15 -- host/discovery.sh@79 -- # expected_count=1 00:25:58.170 13:36:15 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.170 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.170 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.170 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:58.170 13:36:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:58.170 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.170 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.170 13:36:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.170 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.170 13:36:15 -- host/discovery.sh@74 -- # notification_count=1 00:25:58.170 13:36:15 -- host/discovery.sh@75 -- # notify_id=1 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:58.170 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.170 13:36:15 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:58.170 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.170 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.170 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.170 13:36:15 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.170 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.170 13:36:15 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # sort 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.430 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.430 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # xargs 00:25:58.430 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.430 13:36:15 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:58.430 13:36:15 -- host/discovery.sh@79 -- # expected_count=1 00:25:58.430 13:36:15 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:58.430 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:58.430 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.430 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:58.430 13:36:15 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:58.430 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.430 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 13:36:15 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.430 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.430 13:36:15 -- host/discovery.sh@74 -- # notification_count=1 00:25:58.430 13:36:15 -- host/discovery.sh@75 -- # notify_id=2 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:58.430 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.430 13:36:15 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:58.430 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.430 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 [2024-04-26 13:36:15.727910] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.430 [2024-04-26 13:36:15.728412] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:58.430 [2024-04-26 13:36:15.728456] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.430 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.430 13:36:15 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.430 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:58.430 13:36:15 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.430 13:36:15 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.430 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.430 13:36:15 -- host/discovery.sh@59 -- # sort 00:25:58.430 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 13:36:15 -- host/discovery.sh@59 -- # xargs 00:25:58.430 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.430 13:36:15 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.430 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.430 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.430 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # sort 00:25:58.430 13:36:15 -- host/discovery.sh@55 -- # xargs 00:25:58.430 [2024-04-26 13:36:15.814501] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:58.430 
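What this stage exercises: the host side started bdev_nvme_start_discovery against the discovery subsystem on port 8009, so every later change on the target is propagated through discovery AERs and log pages rather than explicit attach calls, which is exactly what the 'new subsystem nvme0' and 'new path for nvme0' notices above show. The sequence, condensed with the NQNs and ports from this run (rpc_cmd is the autotest shell helper that forwards to scripts/rpc.py):
# host side, RPC socket /tmp/host.sock
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# target side: each step below is picked up automatically by the discovery service
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # controller nvme0 and bdev nvme0n1 appear
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1                        # bdev nvme0n2 appears
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421   # second path (4420 4421)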
13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.430 13:36:15 -- common/autotest_common.sh@904 -- # return 0 00:25:58.430 13:36:15 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.430 13:36:15 -- common/autotest_common.sh@901 -- # local max=10 00:25:58.430 13:36:15 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:58.430 13:36:15 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.431 13:36:15 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:58.431 13:36:15 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.431 13:36:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:58.431 13:36:15 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.431 13:36:15 -- common/autotest_common.sh@10 -- # set +x 00:25:58.431 13:36:15 -- host/discovery.sh@63 -- # xargs 00:25:58.431 13:36:15 -- host/discovery.sh@63 -- # sort -n 00:25:58.431 13:36:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:58.431 [2024-04-26 13:36:15.872901] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.431 [2024-04-26 13:36:15.872956] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.431 [2024-04-26 13:36:15.872967] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:58.689 13:36:15 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.689 13:36:15 -- common/autotest_common.sh@906 -- # sleep 1 00:25:59.625 13:36:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.625 13:36:16 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:59.625 13:36:16 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:59.625 13:36:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.625 13:36:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.625 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.625 13:36:16 -- host/discovery.sh@63 -- # sort -n 00:25:59.625 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:59.625 13:36:16 -- host/discovery.sh@63 -- # xargs 00:25:59.625 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.625 13:36:16 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:59.625 13:36:16 -- common/autotest_common.sh@904 -- # return 0 00:25:59.625 13:36:16 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:59.625 13:36:16 -- host/discovery.sh@79 -- # expected_count=0 00:25:59.625 13:36:16 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.625 13:36:16 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.625 13:36:16 -- 
common/autotest_common.sh@901 -- # local max=10 00:25:59.625 13:36:16 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.625 13:36:16 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.625 13:36:16 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:59.625 13:36:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.625 13:36:16 -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.625 13:36:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.625 13:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:59.625 13:36:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.625 13:36:17 -- host/discovery.sh@74 -- # notification_count=0 00:25:59.625 13:36:17 -- host/discovery.sh@75 -- # notify_id=2 00:25:59.625 13:36:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:59.625 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:25:59.625 13:36:17 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.625 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.625 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.625 [2024-04-26 13:36:17.025748] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:59.625 [2024-04-26 13:36:17.025864] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.625 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.625 13:36:17 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.625 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.625 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:59.625 [2024-04-26 13:36:17.030091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.625 [2024-04-26 13:36:17.030130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.625 [2024-04-26 13:36:17.030145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.625 [2024-04-26 13:36:17.030155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.625 [2024-04-26 13:36:17.030166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.625 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.625 [2024-04-26 13:36:17.030176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.625 [2024-04-26 13:36:17.030187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.625 [2024-04-26 13:36:17.030196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.625 [2024-04-26 13:36:17.030206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 
is same with the state(5) to be set 00:25:59.625 13:36:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:59.625 13:36:17 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:59.625 13:36:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.625 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.625 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.625 13:36:17 -- host/discovery.sh@59 -- # sort 00:25:59.625 13:36:17 -- host/discovery.sh@59 -- # xargs 00:25:59.625 13:36:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.625 [2024-04-26 13:36:17.040044] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.625 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.625 [2024-04-26 13:36:17.050073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.625 [2024-04-26 13:36:17.050231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.050302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.050343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.625 [2024-04-26 13:36:17.050356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.625 [2024-04-26 13:36:17.050377] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.625 [2024-04-26 13:36:17.050414] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.625 [2024-04-26 13:36:17.050426] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.625 [2024-04-26 13:36:17.050437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.625 [2024-04-26 13:36:17.050455] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
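The autotest_common.sh@900-904 frames interleaved above come from a polling helper: the test saves the condition string, retries it up to ten times via eval, and returns success as soon as it holds. A minimal sketch of that pattern, reconstructed from the trace (the sleep between attempts and the failure return are assumptions; only the retry count and the eval-based check are visible in the log):

    # Hedged sketch of the waitforcondition pattern seen in the xtrace above.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1   # assumed back-off; not shown in the trace
        done
        return 1      # assumed timeout behaviour
    }
    # Example usage, mirroring the trace:
    #   waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'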
00:25:59.625 [2024-04-26 13:36:17.060138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.625 [2024-04-26 13:36:17.060221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.060267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.060283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.625 [2024-04-26 13:36:17.060294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.625 [2024-04-26 13:36:17.060310] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.625 [2024-04-26 13:36:17.060325] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.625 [2024-04-26 13:36:17.060334] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.625 [2024-04-26 13:36:17.060344] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.625 [2024-04-26 13:36:17.060359] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.625 [2024-04-26 13:36:17.070192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.625 [2024-04-26 13:36:17.070279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.070338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.625 [2024-04-26 13:36:17.070356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.625 [2024-04-26 13:36:17.070367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.625 [2024-04-26 13:36:17.070383] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.625 [2024-04-26 13:36:17.070408] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.625 [2024-04-26 13:36:17.070419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.625 [2024-04-26 13:36:17.070437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.625 [2024-04-26 13:36:17.070453] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.884 [2024-04-26 13:36:17.080245] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.884 [2024-04-26 13:36:17.080331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.080378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.080393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.884 [2024-04-26 13:36:17.080403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.884 [2024-04-26 13:36:17.080419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.884 [2024-04-26 13:36:17.080433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.884 [2024-04-26 13:36:17.080442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.884 [2024-04-26 13:36:17.080451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.884 [2024-04-26 13:36:17.080465] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.884 13:36:17 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.884 [2024-04-26 13:36:17.090300] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.884 [2024-04-26 13:36:17.090399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:25:59.884 [2024-04-26 13:36:17.090445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.090461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.884 [2024-04-26 13:36:17.090471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.884 [2024-04-26 13:36:17.090486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.884 [2024-04-26 13:36:17.090510] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.884 [2024-04-26 13:36:17.090520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.884 [2024-04-26 13:36:17.090529] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.884 [2024-04-26 13:36:17.090543] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.884 13:36:17 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.884 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.884 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:59.884 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.884 13:36:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:59.884 13:36:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:25:59.884 13:36:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.884 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.884 13:36:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.884 13:36:17 -- host/discovery.sh@55 -- # sort 00:25:59.884 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.884 13:36:17 -- host/discovery.sh@55 -- # xargs 00:25:59.884 [2024-04-26 13:36:17.100376] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.884 [2024-04-26 13:36:17.100502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.100550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.100566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.884 [2024-04-26 13:36:17.100577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.884 [2024-04-26 13:36:17.100594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.884 [2024-04-26 13:36:17.100610] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.884 [2024-04-26 13:36:17.100619] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.884 [2024-04-26 13:36:17.100630] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.884 [2024-04-26 13:36:17.100645] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
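The host/discovery.sh@55 and @59 frames show how these list checks are built: a JSON-RPC call over the host application's Unix socket, with the names extracted by jq, sorted, and flattened onto one line by xargs. A plausible reconstruction of those helpers, assuming rpc_cmd simply forwards its arguments to the SPDK JSON-RPC client against /tmp/host.sock:

    # Sketch of the list helpers visible at host/discovery.sh@55 and @59;
    # the jq | sort | xargs pipeline is taken directly from the trace.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }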
00:25:59.884 [2024-04-26 13:36:17.110463] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.884 [2024-04-26 13:36:17.110591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.110640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.884 [2024-04-26 13:36:17.110656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e6c720 with addr=10.0.0.2, port=4420 00:25:59.884 [2024-04-26 13:36:17.110668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6c720 is same with the state(5) to be set 00:25:59.884 [2024-04-26 13:36:17.110710] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6c720 (9): Bad file descriptor 00:25:59.884 [2024-04-26 13:36:17.110728] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.884 [2024-04-26 13:36:17.110738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.884 [2024-04-26 13:36:17.110748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.884 [2024-04-26 13:36:17.110764] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.884 [2024-04-26 13:36:17.113372] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:59.884 [2024-04-26 13:36:17.113405] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.884 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.884 13:36:17 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.884 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:25:59.884 13:36:17 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.884 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.884 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:59.884 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.884 13:36:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:25:59.885 13:36:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.885 13:36:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.885 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.885 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.885 13:36:17 -- host/discovery.sh@63 -- # sort -n 00:25:59.885 13:36:17 -- host/discovery.sh@63 -- # xargs 00:25:59.885 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.885 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:25:59.885 13:36:17 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:59.885 13:36:17 -- host/discovery.sh@79 -- # expected_count=0 00:25:59.885 13:36:17 -- 
host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.885 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.885 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:59.885 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:25:59.885 13:36:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.885 13:36:17 -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.885 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.885 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.885 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.885 13:36:17 -- host/discovery.sh@74 -- # notification_count=0 00:25:59.885 13:36:17 -- host/discovery.sh@75 -- # notify_id=2 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:25:59.885 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:25:59.885 13:36:17 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:59.885 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.885 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.885 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:59.885 13:36:17 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.885 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.885 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:25:59.885 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:59.885 13:36:17 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:25:59.885 13:36:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.885 13:36:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.885 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:59.885 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:25:59.885 13:36:17 -- host/discovery.sh@59 -- # xargs 00:25:59.885 13:36:17 -- host/discovery.sh@59 -- # sort 00:25:59.885 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:00.143 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:26:00.143 13:36:17 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:00.143 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:00.143 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:26:00.143 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # get_bdev_list 00:26:00.143 13:36:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.143 13:36:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.143 13:36:17 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.143 13:36:17 -- host/discovery.sh@55 -- # sort 00:26:00.143 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:26:00.143 13:36:17 -- host/discovery.sh@55 -- # xargs 00:26:00.143 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:26:00.143 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:26:00.143 13:36:17 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:00.143 13:36:17 -- host/discovery.sh@79 -- # expected_count=2 00:26:00.143 13:36:17 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:00.143 13:36:17 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:00.143 13:36:17 -- common/autotest_common.sh@901 -- # local max=10 00:26:00.143 13:36:17 -- common/autotest_common.sh@902 -- # (( max-- )) 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # get_notification_count 00:26:00.143 13:36:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:00.143 13:36:17 -- host/discovery.sh@74 -- # jq '. | length' 00:26:00.143 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.143 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:26:00.143 13:36:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:00.143 13:36:17 -- host/discovery.sh@74 -- # notification_count=2 00:26:00.143 13:36:17 -- host/discovery.sh@75 -- # notify_id=4 00:26:00.143 13:36:17 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:26:00.143 13:36:17 -- common/autotest_common.sh@904 -- # return 0 00:26:00.143 13:36:17 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:00.143 13:36:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:00.143 13:36:17 -- common/autotest_common.sh@10 -- # set +x 00:26:01.077 [2024-04-26 13:36:18.461194] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:01.077 [2024-04-26 13:36:18.461247] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:01.077 [2024-04-26 13:36:18.461268] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.337 [2024-04-26 13:36:18.547320] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.337 [2024-04-26 13:36:18.607253] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.337 [2024-04-26 13:36:18.607331] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.337 13:36:18 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@638 -- # local es=0 00:26:01.337 13:36:18 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.337 13:36:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.337 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 2024/04/26 13:36:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:01.337 request: 00:26:01.337 { 00:26:01.337 "method": "bdev_nvme_start_discovery", 00:26:01.337 "params": { 00:26:01.337 "name": "nvme", 00:26:01.337 "trtype": "tcp", 00:26:01.337 "traddr": "10.0.0.2", 00:26:01.337 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.337 "adrfam": "ipv4", 00:26:01.337 "trsvcid": "8009", 00:26:01.337 "wait_for_attach": true 00:26:01.337 } 00:26:01.337 } 00:26:01.337 Got JSON-RPC error response 00:26:01.337 GoRPCClient: error on JSON-RPC call 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:01.337 13:36:18 -- common/autotest_common.sh@641 -- # es=1 00:26:01.337 13:36:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:01.337 13:36:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:01.337 13:36:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:01.337 13:36:18 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # sort 00:26:01.337 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.337 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # xargs 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.337 13:36:18 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.337 13:36:18 -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.337 13:36:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.337 13:36:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.337 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.337 13:36:18 -- host/discovery.sh@55 -- # sort 00:26:01.337 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 13:36:18 -- host/discovery.sh@55 -- # xargs 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.337 13:36:18 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.337 13:36:18 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@638 -- # 
local es=0 00:26:01.337 13:36:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:01.337 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.337 13:36:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.337 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.337 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 2024/04/26 13:36:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:26:01.337 request: 00:26:01.337 { 00:26:01.337 "method": "bdev_nvme_start_discovery", 00:26:01.337 "params": { 00:26:01.337 "name": "nvme_second", 00:26:01.337 "trtype": "tcp", 00:26:01.337 "traddr": "10.0.0.2", 00:26:01.337 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.337 "adrfam": "ipv4", 00:26:01.337 "trsvcid": "8009", 00:26:01.337 "wait_for_attach": true 00:26:01.337 } 00:26:01.337 } 00:26:01.337 Got JSON-RPC error response 00:26:01.337 GoRPCClient: error on JSON-RPC call 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:01.337 13:36:18 -- common/autotest_common.sh@641 -- # es=1 00:26:01.337 13:36:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:01.337 13:36:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:01.337 13:36:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:01.337 13:36:18 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.337 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # xargs 00:26:01.337 13:36:18 -- host/discovery.sh@67 -- # sort 00:26:01.337 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 13:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.596 13:36:18 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.596 13:36:18 -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.596 13:36:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.596 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.596 13:36:18 -- host/discovery.sh@55 -- # sort 00:26:01.596 13:36:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.596 13:36:18 -- host/discovery.sh@55 -- # xargs 00:26:01.596 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:01.596 13:36:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:01.596 13:36:18 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.596 13:36:18 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test 
-T 3000 00:26:01.596 13:36:18 -- common/autotest_common.sh@638 -- # local es=0 00:26:01.596 13:36:18 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.596 13:36:18 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:01.596 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.596 13:36:18 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:01.596 13:36:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:01.596 13:36:18 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.596 13:36:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:01.596 13:36:18 -- common/autotest_common.sh@10 -- # set +x 00:26:02.529 [2024-04-26 13:36:19.904770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.529 [2024-04-26 13:36:19.904906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.529 [2024-04-26 13:36:19.904928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ed8bf0 with addr=10.0.0.2, port=8010 00:26:02.529 [2024-04-26 13:36:19.904954] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.529 [2024-04-26 13:36:19.904966] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.529 [2024-04-26 13:36:19.904977] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.462 [2024-04-26 13:36:20.904800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.462 [2024-04-26 13:36:20.905233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.462 [2024-04-26 13:36:20.905384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1efc220 with addr=10.0.0.2, port=8010 00:26:03.462 [2024-04-26 13:36:20.905563] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.462 [2024-04-26 13:36:20.905726] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.462 [2024-04-26 13:36:20.905881] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.834 [2024-04-26 13:36:21.904559] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.834 2024/04/26 13:36:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:26:04.834 request: 00:26:04.834 { 00:26:04.834 "method": "bdev_nvme_start_discovery", 00:26:04.834 "params": { 00:26:04.834 "name": "nvme_second", 00:26:04.834 "trtype": "tcp", 00:26:04.834 "traddr": "10.0.0.2", 00:26:04.834 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.834 "adrfam": "ipv4", 00:26:04.834 "trsvcid": "8010", 00:26:04.834 "attach_timeout_ms": 3000 00:26:04.834 } 00:26:04.834 } 00:26:04.834 Got JSON-RPC error response 00:26:04.834 GoRPCClient: error on JSON-RPC call 00:26:04.834 13:36:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:04.834 
13:36:21 -- common/autotest_common.sh@641 -- # es=1 00:26:04.834 13:36:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:04.834 13:36:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:04.834 13:36:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:04.834 13:36:21 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:04.834 13:36:21 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.834 13:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:04.834 13:36:21 -- common/autotest_common.sh@10 -- # set +x 00:26:04.834 13:36:21 -- host/discovery.sh@67 -- # sort 00:26:04.834 13:36:21 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.834 13:36:21 -- host/discovery.sh@67 -- # xargs 00:26:04.834 13:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:04.834 13:36:21 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:04.834 13:36:21 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:04.834 13:36:21 -- host/discovery.sh@161 -- # kill 82466 00:26:04.834 13:36:21 -- host/discovery.sh@162 -- # nvmftestfini 00:26:04.834 13:36:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:04.834 13:36:21 -- nvmf/common.sh@117 -- # sync 00:26:04.834 13:36:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:04.834 13:36:22 -- nvmf/common.sh@120 -- # set +e 00:26:04.834 13:36:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:04.834 13:36:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:04.834 rmmod nvme_tcp 00:26:04.834 rmmod nvme_fabrics 00:26:04.834 rmmod nvme_keyring 00:26:05.092 13:36:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:05.092 13:36:22 -- nvmf/common.sh@124 -- # set -e 00:26:05.092 13:36:22 -- nvmf/common.sh@125 -- # return 0 00:26:05.092 13:36:22 -- nvmf/common.sh@478 -- # '[' -n 82416 ']' 00:26:05.092 13:36:22 -- nvmf/common.sh@479 -- # killprocess 82416 00:26:05.092 13:36:22 -- common/autotest_common.sh@936 -- # '[' -z 82416 ']' 00:26:05.092 13:36:22 -- common/autotest_common.sh@940 -- # kill -0 82416 00:26:05.092 13:36:22 -- common/autotest_common.sh@941 -- # uname 00:26:05.092 13:36:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:05.092 13:36:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82416 00:26:05.092 killing process with pid 82416 00:26:05.093 13:36:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:05.093 13:36:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:05.093 13:36:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82416' 00:26:05.093 13:36:22 -- common/autotest_common.sh@955 -- # kill 82416 00:26:05.093 13:36:22 -- common/autotest_common.sh@960 -- # wait 82416 00:26:05.354 13:36:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:05.354 13:36:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:05.354 13:36:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:05.354 13:36:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.354 13:36:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:05.354 13:36:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.354 13:36:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.354 13:36:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.354 13:36:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:05.354 00:26:05.354 real 0m11.642s 00:26:05.354 user 0m22.539s 00:26:05.354 sys 
0m1.829s 00:26:05.354 ************************************ 00:26:05.354 END TEST nvmf_discovery 00:26:05.354 ************************************ 00:26:05.354 13:36:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:05.354 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:26:05.354 13:36:22 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:05.354 13:36:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:05.354 13:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:05.354 13:36:22 -- common/autotest_common.sh@10 -- # set +x 00:26:05.354 ************************************ 00:26:05.354 START TEST nvmf_discovery_remove_ifc 00:26:05.354 ************************************ 00:26:05.354 13:36:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:05.614 * Looking for test storage... 00:26:05.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:05.614 13:36:22 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:05.614 13:36:22 -- nvmf/common.sh@7 -- # uname -s 00:26:05.614 13:36:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.614 13:36:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.614 13:36:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.614 13:36:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.614 13:36:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.614 13:36:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.614 13:36:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.614 13:36:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.614 13:36:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.614 13:36:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.614 13:36:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:05.614 13:36:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:05.614 13:36:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.614 13:36:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.614 13:36:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:05.614 13:36:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.614 13:36:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.614 13:36:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.614 13:36:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.614 13:36:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.614 13:36:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.614 13:36:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.614 13:36:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.614 13:36:22 -- paths/export.sh@5 -- # export PATH 00:26:05.615 13:36:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.615 13:36:22 -- nvmf/common.sh@47 -- # : 0 00:26:05.615 13:36:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.615 13:36:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.615 13:36:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.615 13:36:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.615 13:36:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.615 13:36:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.615 13:36:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.615 13:36:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:05.615 13:36:22 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:05.615 13:36:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:05.615 13:36:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.615 13:36:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:05.615 13:36:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:05.615 13:36:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:05.615 13:36:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.615 13:36:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.615 13:36:22 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.615 13:36:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:05.615 13:36:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:05.615 13:36:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:05.615 13:36:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:05.615 13:36:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:05.615 13:36:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:05.615 13:36:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.615 13:36:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:05.615 13:36:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:05.615 13:36:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:05.615 13:36:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:05.615 13:36:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:05.615 13:36:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:05.615 13:36:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.615 13:36:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:05.615 13:36:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:05.615 13:36:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:05.615 13:36:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:05.615 13:36:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:05.615 13:36:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:05.615 Cannot find device "nvmf_tgt_br" 00:26:05.615 13:36:22 -- nvmf/common.sh@155 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.615 Cannot find device "nvmf_tgt_br2" 00:26:05.615 13:36:22 -- nvmf/common.sh@156 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:05.615 13:36:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:05.615 Cannot find device "nvmf_tgt_br" 00:26:05.615 13:36:22 -- nvmf/common.sh@158 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:05.615 Cannot find device "nvmf_tgt_br2" 00:26:05.615 13:36:22 -- nvmf/common.sh@159 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:05.615 13:36:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:05.615 13:36:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.615 13:36:22 -- nvmf/common.sh@162 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.615 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.615 13:36:22 -- nvmf/common.sh@163 -- # true 00:26:05.615 13:36:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:05.615 13:36:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:05.615 13:36:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:05.615 13:36:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.615 13:36:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.874 13:36:23 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.874 13:36:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.874 13:36:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:05.874 13:36:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:05.874 13:36:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:05.874 13:36:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:05.874 13:36:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:05.874 13:36:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:05.874 13:36:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.874 13:36:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.874 13:36:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.874 13:36:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:05.874 13:36:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:05.874 13:36:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.874 13:36:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:05.874 13:36:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.874 13:36:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.874 13:36:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.874 13:36:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:05.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:26:05.874 00:26:05.874 --- 10.0.0.2 ping statistics --- 00:26:05.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.874 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:05.874 13:36:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:05.874 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:05.874 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:26:05.874 00:26:05.874 --- 10.0.0.3 ping statistics --- 00:26:05.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.874 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:05.874 13:36:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:05.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:05.874 00:26:05.874 --- 10.0.0.1 ping statistics --- 00:26:05.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.874 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:05.874 13:36:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.874 13:36:23 -- nvmf/common.sh@422 -- # return 0 00:26:05.874 13:36:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:05.874 13:36:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.874 13:36:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:05.874 13:36:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:05.874 13:36:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.874 13:36:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:05.874 13:36:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:05.874 13:36:23 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:05.874 13:36:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:05.874 13:36:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:05.874 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.874 13:36:23 -- nvmf/common.sh@470 -- # nvmfpid=82963 00:26:05.874 13:36:23 -- nvmf/common.sh@471 -- # waitforlisten 82963 00:26:05.874 13:36:23 -- common/autotest_common.sh@817 -- # '[' -z 82963 ']' 00:26:05.874 13:36:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:05.874 13:36:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.874 13:36:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.874 13:36:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.874 13:36:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.874 13:36:23 -- common/autotest_common.sh@10 -- # set +x 00:26:05.874 [2024-04-26 13:36:23.297268] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:26:05.874 [2024-04-26 13:36:23.297383] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.135 [2024-04-26 13:36:23.441603] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.135 [2024-04-26 13:36:23.572385] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.135 [2024-04-26 13:36:23.572467] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.135 [2024-04-26 13:36:23.572484] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.135 [2024-04-26 13:36:23.572495] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.135 [2024-04-26 13:36:23.572505] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
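The nvmf/common.sh@141-207 frames above build the virtual test topology: a network namespace for the target, veth pairs for the initiator interface and the two target interfaces, a bridge tying the peer ends together, firewall rules for port 4420, and ping checks in both directions. Condensed from the commands visible in the trace (addresses and interface names as logged; the link-up steps and cleanup/error handling are abbreviated):

    # Condensed from the nvmf_veth_init trace above; not a full copy of nvmf/common.sh.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring all interfaces up (host side and inside the namespace), then bridge them
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT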
00:26:06.135 [2024-04-26 13:36:23.572552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.072 13:36:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:07.072 13:36:24 -- common/autotest_common.sh@850 -- # return 0 00:26:07.072 13:36:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:07.072 13:36:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:07.072 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:07.072 13:36:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.072 13:36:24 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:07.072 13:36:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:07.072 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:07.072 [2024-04-26 13:36:24.350299] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.072 [2024-04-26 13:36:24.358472] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:07.072 null0 00:26:07.072 [2024-04-26 13:36:24.390503] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:07.072 13:36:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:07.072 13:36:24 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83013 00:26:07.072 13:36:24 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:07.072 13:36:24 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83013 /tmp/host.sock 00:26:07.072 13:36:24 -- common/autotest_common.sh@817 -- # '[' -z 83013 ']' 00:26:07.072 13:36:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:07.072 13:36:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:07.072 13:36:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:07.072 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:07.072 13:36:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:07.072 13:36:24 -- common/autotest_common.sh@10 -- # set +x 00:26:07.072 [2024-04-26 13:36:24.470524] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
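At this point the target application is running inside the namespace (nvmf_tgt -m 0x2, pid 82963) and the host-side application has been launched against /tmp/host.sock with -L bdev_nvme. The configuration step at discovery_remove_ifc.sh@43 is a bare rpc_cmd whose input is not echoed in the trace; judging by the listen notices on ports 8009 and 4420 and the null0 namespace that follow, it issues JSON-RPC calls roughly along these lines. This is a hedged reconstruction, not the script's literal contents: addresses, ports, NQNs, the serial number, and the transport options mirror values visible in the trace, while the null bdev sizes are placeholders.

    # Hypothetical reconstruction of the target-side setup implied by the
    # "Target Listening on 10.0.0.2 port 8009/4420" notices and the null0 bdev;
    # rpc_cmd here is assumed to address the target application's default socket.
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512   # sizes assumed, name from the trace
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420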
00:26:07.072 [2024-04-26 13:36:24.470625] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83013 ] 00:26:07.331 [2024-04-26 13:36:24.609728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.331 [2024-04-26 13:36:24.740626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.266 13:36:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:08.266 13:36:25 -- common/autotest_common.sh@850 -- # return 0 00:26:08.266 13:36:25 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:08.267 13:36:25 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:08.267 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.267 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:26:08.267 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.267 13:36:25 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:08.267 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.267 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:26:08.267 13:36:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:08.267 13:36:25 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:08.267 13:36:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:08.267 13:36:25 -- common/autotest_common.sh@10 -- # set +x 00:26:09.202 [2024-04-26 13:36:26.564158] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:09.202 [2024-04-26 13:36:26.564213] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:09.202 [2024-04-26 13:36:26.564235] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:09.202 [2024-04-26 13:36:26.650445] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:09.459 [2024-04-26 13:36:26.706898] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:09.459 [2024-04-26 13:36:26.707011] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:09.459 [2024-04-26 13:36:26.707039] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:09.459 [2024-04-26 13:36:26.707061] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:09.459 [2024-04-26 13:36:26.707093] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:09.459 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.459 13:36:26 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:09.459 13:36:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.459 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.459 [2024-04-26 13:36:26.712658] 
bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1105630 was disconnected and freed. delete nvme_qpair. 00:26:09.460 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.460 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.460 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.460 13:36:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:09.460 13:36:26 -- common/autotest_common.sh@10 -- # set +x 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.460 13:36:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:09.460 13:36:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:10.394 13:36:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:10.394 13:36:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:10.394 13:36:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:10.394 13:36:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:10.676 13:36:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.676 13:36:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:10.676 13:36:27 -- common/autotest_common.sh@10 -- # set +x 00:26:10.676 13:36:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:10.676 13:36:27 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:10.676 13:36:27 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:11.609 13:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:11.609 13:36:28 -- common/autotest_common.sh@10 -- # set +x 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:11.609 13:36:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:11.609 13:36:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.543 13:36:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.543 13:36:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.543 13:36:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:12.543 13:36:29 -- 
common/autotest_common.sh@10 -- # set +x 00:26:12.543 13:36:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.543 13:36:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.543 13:36:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.543 13:36:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:12.801 13:36:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.801 13:36:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.774 13:36:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.774 13:36:31 -- common/autotest_common.sh@10 -- # set +x 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.774 13:36:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.774 13:36:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.709 13:36:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:14.709 13:36:32 -- common/autotest_common.sh@10 -- # set +x 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.709 13:36:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:14.709 [2024-04-26 13:36:32.134518] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:14.709 [2024-04-26 13:36:32.134597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.709 [2024-04-26 13:36:32.134615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.709 [2024-04-26 13:36:32.134629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.709 [2024-04-26 13:36:32.134640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.709 [2024-04-26 13:36:32.134650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.709 [2024-04-26 13:36:32.134659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.709 [2024-04-26 13:36:32.134669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.709 [2024-04-26 13:36:32.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.709 [2024-04-26 13:36:32.134688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.709 [2024-04-26 13:36:32.134697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.709 [2024-04-26 13:36:32.134707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077300 is same with the state(5) to be set 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.709 13:36:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.709 [2024-04-26 13:36:32.144512] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077300 (9): Bad file descriptor 00:26:14.709 [2024-04-26 13:36:32.154545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:16.090 13:36:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.090 13:36:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.090 13:36:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.090 13:36:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.090 13:36:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.090 13:36:33 -- common/autotest_common.sh@10 -- # set +x 00:26:16.090 13:36:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.091 [2024-04-26 13:36:33.161926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:17.024 [2024-04-26 13:36:34.185957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:17.024 [2024-04-26 13:36:34.186105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1077300 with addr=10.0.0.2, port=4420 00:26:17.024 [2024-04-26 13:36:34.186146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1077300 is same with the state(5) to be set 00:26:17.024 [2024-04-26 13:36:34.187109] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1077300 (9): Bad file descriptor 00:26:17.024 [2024-04-26 13:36:34.187186] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:17.024 [2024-04-26 13:36:34.187242] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:17.024 [2024-04-26 13:36:34.187324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.024 [2024-04-26 13:36:34.187356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.024 [2024-04-26 13:36:34.187384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.024 [2024-04-26 13:36:34.187404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.024 [2024-04-26 13:36:34.187426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.024 [2024-04-26 13:36:34.187446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.024 [2024-04-26 13:36:34.187468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.024 [2024-04-26 13:36:34.187488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.024 [2024-04-26 13:36:34.187511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:17.024 [2024-04-26 13:36:34.187531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:17.024 [2024-04-26 13:36:34.187553] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:17.024 [2024-04-26 13:36:34.187615] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1076180 (9): Bad file descriptor 00:26:17.024 [2024-04-26 13:36:34.188615] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:17.024 [2024-04-26 13:36:34.188655] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:17.024 13:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.024 13:36:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.024 13:36:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.956 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.956 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.956 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.956 13:36:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.956 13:36:35 -- common/autotest_common.sh@10 -- # set +x 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.956 13:36:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:17.956 13:36:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.925 [2024-04-26 13:36:36.196050] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:18.925 [2024-04-26 13:36:36.196106] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:18.925 [2024-04-26 13:36:36.196128] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:18.925 [2024-04-26 13:36:36.282197] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:18.925 [2024-04-26 13:36:36.337816] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:18.925 [2024-04-26 13:36:36.337909] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:18.925 [2024-04-26 13:36:36.337935] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:18.925 [2024-04-26 13:36:36.337953] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:26:18.925 [2024-04-26 13:36:36.337963] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:18.925 [2024-04-26 13:36:36.344487] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10e5f80 was disconnected and freed. delete nvme_qpair. 00:26:18.925 13:36:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.925 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.925 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.925 13:36:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.925 13:36:36 -- common/autotest_common.sh@10 -- # set +x 00:26:18.925 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.925 13:36:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.185 13:36:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.185 13:36:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:19.185 13:36:36 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:19.185 13:36:36 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83013 00:26:19.185 13:36:36 -- common/autotest_common.sh@936 -- # '[' -z 83013 ']' 00:26:19.185 13:36:36 -- common/autotest_common.sh@940 -- # kill -0 83013 00:26:19.185 13:36:36 -- common/autotest_common.sh@941 -- # uname 00:26:19.185 13:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.185 13:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83013 00:26:19.185 killing process with pid 83013 00:26:19.185 13:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:19.185 13:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:19.185 13:36:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83013' 00:26:19.185 13:36:36 -- common/autotest_common.sh@955 -- # kill 83013 00:26:19.185 13:36:36 -- common/autotest_common.sh@960 -- # wait 83013 00:26:19.445 13:36:36 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:19.445 13:36:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:19.445 13:36:36 -- nvmf/common.sh@117 -- # sync 00:26:19.445 13:36:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.445 13:36:36 -- nvmf/common.sh@120 -- # set +e 00:26:19.445 13:36:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.445 13:36:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.445 rmmod nvme_tcp 00:26:19.445 rmmod nvme_fabrics 00:26:19.445 rmmod nvme_keyring 00:26:19.445 13:36:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.445 13:36:36 -- nvmf/common.sh@124 -- # set -e 00:26:19.445 13:36:36 -- nvmf/common.sh@125 -- # return 0 00:26:19.445 13:36:36 -- nvmf/common.sh@478 -- # '[' -n 82963 ']' 00:26:19.445 13:36:36 -- nvmf/common.sh@479 -- # killprocess 82963 00:26:19.445 13:36:36 -- common/autotest_common.sh@936 -- # '[' -z 82963 ']' 00:26:19.445 13:36:36 -- common/autotest_common.sh@940 -- # kill -0 82963 00:26:19.445 13:36:36 -- common/autotest_common.sh@941 -- # uname 00:26:19.445 13:36:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:19.445 13:36:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82963 00:26:19.445 13:36:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:19.445 13:36:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:19.445 killing process with pid 82963 
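[editor's note] The core of the discovery_remove_ifc case is visible above: deleting the target-side address tears the data path down (the bdev list drains and reconnects fail with errno 110), and restoring it lets the still-running discovery connection re-attach the subsystem, which reappears as nvme1n1. A condensed sketch of that interface-flap sequence, assuming the same /tmp/host.sock RPC socket and the namespace/interface names this harness creates (nvmf_tgt_ns_spdk, nvmf_tgt_if):

    # Remove the target-side address and take the interface down ...
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

    # ... poll until the host-side bdev list is empty (connection loss observed) ...
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

    # ... then restore the address and bring the link back up
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    # The persistent discovery connection re-attaches the subsystem; the namespace
    # comes back as a new bdev (nvme1n1 in the log above)
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'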
00:26:19.445 13:36:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82963' 00:26:19.445 13:36:36 -- common/autotest_common.sh@955 -- # kill 82963 00:26:19.445 13:36:36 -- common/autotest_common.sh@960 -- # wait 82963 00:26:20.013 13:36:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:20.013 13:36:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:20.013 13:36:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:20.013 13:36:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:20.013 13:36:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:20.013 13:36:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.013 13:36:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.013 13:36:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.013 13:36:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:20.013 00:26:20.013 real 0m14.507s 00:26:20.013 user 0m24.781s 00:26:20.013 sys 0m1.703s 00:26:20.013 13:36:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:20.013 13:36:37 -- common/autotest_common.sh@10 -- # set +x 00:26:20.013 ************************************ 00:26:20.013 END TEST nvmf_discovery_remove_ifc 00:26:20.013 ************************************ 00:26:20.013 13:36:37 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:20.013 13:36:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:20.013 13:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:20.013 13:36:37 -- common/autotest_common.sh@10 -- # set +x 00:26:20.013 ************************************ 00:26:20.013 START TEST nvmf_identify_kernel_target 00:26:20.013 ************************************ 00:26:20.013 13:36:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:20.013 * Looking for test storage... 
00:26:20.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:20.013 13:36:37 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:20.013 13:36:37 -- nvmf/common.sh@7 -- # uname -s 00:26:20.013 13:36:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.013 13:36:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.013 13:36:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.013 13:36:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.013 13:36:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.013 13:36:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.013 13:36:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.013 13:36:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.014 13:36:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.014 13:36:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.272 13:36:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:20.272 13:36:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:20.272 13:36:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.272 13:36:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.272 13:36:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.272 13:36:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.272 13:36:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.272 13:36:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.272 13:36:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.272 13:36:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.272 13:36:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.272 13:36:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.272 13:36:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.272 13:36:37 -- paths/export.sh@5 -- # export PATH 00:26:20.272 13:36:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.272 13:36:37 -- nvmf/common.sh@47 -- # : 0 00:26:20.272 13:36:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:20.272 13:36:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:20.272 13:36:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.272 13:36:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.272 13:36:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.272 13:36:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:20.272 13:36:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:20.272 13:36:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:20.272 13:36:37 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:20.272 13:36:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:20.272 13:36:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.272 13:36:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:20.272 13:36:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:20.272 13:36:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:20.272 13:36:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.272 13:36:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:20.272 13:36:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.272 13:36:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:20.272 13:36:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:20.272 13:36:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:20.272 13:36:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:20.272 13:36:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:20.272 13:36:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:20.272 13:36:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.272 13:36:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.272 13:36:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:20.272 13:36:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:20.272 13:36:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.272 13:36:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.272 13:36:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.272 13:36:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:20.272 13:36:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.273 13:36:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.273 13:36:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.273 13:36:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.273 13:36:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:20.273 13:36:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:20.273 Cannot find device "nvmf_tgt_br" 00:26:20.273 13:36:37 -- nvmf/common.sh@155 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.273 Cannot find device "nvmf_tgt_br2" 00:26:20.273 13:36:37 -- nvmf/common.sh@156 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:20.273 13:36:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:20.273 Cannot find device "nvmf_tgt_br" 00:26:20.273 13:36:37 -- nvmf/common.sh@158 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:20.273 Cannot find device "nvmf_tgt_br2" 00:26:20.273 13:36:37 -- nvmf/common.sh@159 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:20.273 13:36:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:20.273 13:36:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.273 13:36:37 -- nvmf/common.sh@162 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.273 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.273 13:36:37 -- nvmf/common.sh@163 -- # true 00:26:20.273 13:36:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.273 13:36:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.273 13:36:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:20.273 13:36:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.273 13:36:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.273 13:36:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.273 13:36:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.273 13:36:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:20.273 13:36:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:20.273 13:36:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:20.273 13:36:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:20.273 13:36:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:20.273 13:36:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:20.273 13:36:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.273 13:36:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.531 13:36:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.531 13:36:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:20.531 13:36:37 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:20.531 13:36:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.531 13:36:37 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.531 13:36:37 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.531 13:36:37 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:20.531 13:36:37 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:20.531 13:36:37 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:20.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:26:20.531 00:26:20.531 --- 10.0.0.2 ping statistics --- 00:26:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.531 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:26:20.531 13:36:37 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:20.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:20.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:26:20.531 00:26:20.531 --- 10.0.0.3 ping statistics --- 00:26:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.531 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:26:20.531 13:36:37 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:20.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:26:20.531 00:26:20.531 --- 10.0.0.1 ping statistics --- 00:26:20.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.531 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:20.531 13:36:37 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.531 13:36:37 -- nvmf/common.sh@422 -- # return 0 00:26:20.531 13:36:37 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:20.531 13:36:37 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.531 13:36:37 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:20.531 13:36:37 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:20.531 13:36:37 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.532 13:36:37 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:20.532 13:36:37 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:20.532 13:36:37 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:20.532 13:36:37 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:20.532 13:36:37 -- nvmf/common.sh@717 -- # local ip 00:26:20.532 13:36:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:20.532 13:36:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:20.532 13:36:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.532 13:36:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.532 13:36:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:20.532 13:36:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.532 13:36:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:20.532 13:36:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:20.532 13:36:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:20.532 13:36:37 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:20.532 13:36:37 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:20.532 13:36:37 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:20.532 13:36:37 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:20.532 13:36:37 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.532 13:36:37 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:20.532 13:36:37 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:20.532 13:36:37 -- nvmf/common.sh@628 -- # local block nvme 00:26:20.532 13:36:37 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:20.532 13:36:37 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:20.532 13:36:37 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:20.532 13:36:37 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:20.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:20.790 Waiting for block devices as requested 00:26:21.048 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:21.048 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:21.048 13:36:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:21.048 13:36:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:21.048 13:36:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:21.048 13:36:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:21.048 13:36:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:21.048 13:36:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:21.048 13:36:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:21.048 13:36:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:21.048 13:36:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:21.313 No valid GPT data, bailing 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # pt= 00:26:21.313 13:36:38 -- scripts/common.sh@392 -- # return 1 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:21.313 13:36:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:21.313 13:36:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:26:21.313 13:36:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:21.313 13:36:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:21.313 13:36:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:26:21.313 13:36:38 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:21.313 13:36:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:21.313 No valid GPT data, bailing 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # pt= 00:26:21.313 13:36:38 -- scripts/common.sh@392 -- # return 1 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:26:21.313 13:36:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:21.313 13:36:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:26:21.313 13:36:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:21.313 13:36:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:21.313 13:36:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:26:21.313 13:36:38 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:21.313 13:36:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:21.313 No valid GPT data, bailing 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:21.313 13:36:38 -- scripts/common.sh@391 -- # pt= 00:26:21.313 13:36:38 -- scripts/common.sh@392 -- # return 1 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:26:21.313 13:36:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:21.313 13:36:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:26:21.313 13:36:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:21.313 13:36:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:21.313 13:36:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:21.313 13:36:38 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:26:21.313 13:36:38 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:21.313 13:36:38 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:21.585 No valid GPT data, bailing 00:26:21.585 13:36:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:21.585 13:36:38 -- scripts/common.sh@391 -- # pt= 00:26:21.585 13:36:38 -- scripts/common.sh@392 -- # return 1 00:26:21.585 13:36:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:26:21.585 13:36:38 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:26:21.585 13:36:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:21.585 13:36:38 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:21.585 13:36:38 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:21.585 13:36:38 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:21.585 13:36:38 -- nvmf/common.sh@656 -- # echo 1 00:26:21.585 13:36:38 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:26:21.585 13:36:38 -- nvmf/common.sh@658 -- # echo 1 00:26:21.585 13:36:38 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:21.585 13:36:38 -- nvmf/common.sh@661 -- # echo tcp 00:26:21.585 13:36:38 -- nvmf/common.sh@662 -- # echo 4420 00:26:21.585 13:36:38 -- nvmf/common.sh@663 -- # echo ipv4 00:26:21.585 13:36:38 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:21.585 13:36:38 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.1 -t tcp -s 4420 00:26:21.585 00:26:21.585 Discovery Log Number of Records 2, Generation counter 2 00:26:21.585 =====Discovery Log Entry 0====== 00:26:21.585 trtype: tcp 00:26:21.585 adrfam: ipv4 00:26:21.585 subtype: current discovery subsystem 00:26:21.585 treq: not specified, sq flow control disable supported 00:26:21.585 portid: 1 00:26:21.585 trsvcid: 4420 00:26:21.585 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:21.585 traddr: 10.0.0.1 00:26:21.585 eflags: none 00:26:21.585 sectype: none 00:26:21.585 =====Discovery Log Entry 1====== 00:26:21.585 trtype: tcp 00:26:21.585 adrfam: ipv4 00:26:21.585 subtype: nvme subsystem 00:26:21.585 treq: not specified, sq flow control disable supported 00:26:21.585 portid: 1 00:26:21.585 trsvcid: 4420 00:26:21.585 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:21.585 traddr: 10.0.0.1 00:26:21.585 eflags: none 00:26:21.585 sectype: none 00:26:21.585 13:36:38 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:21.585 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:21.585 ===================================================== 00:26:21.585 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:21.585 ===================================================== 00:26:21.585 Controller Capabilities/Features 00:26:21.585 ================================ 00:26:21.585 Vendor ID: 0000 00:26:21.585 Subsystem Vendor ID: 0000 00:26:21.585 Serial Number: ac95d72b00e92b3bfd0b 00:26:21.585 Model Number: Linux 00:26:21.585 Firmware Version: 6.7.0-68 00:26:21.585 Recommended Arb Burst: 0 00:26:21.585 IEEE OUI Identifier: 00 00 00 00:26:21.585 Multi-path I/O 00:26:21.585 May have multiple subsystem ports: No 00:26:21.585 May have multiple controllers: No 00:26:21.585 Associated with SR-IOV VF: No 00:26:21.585 Max Data Transfer Size: Unlimited 00:26:21.585 Max Number of Namespaces: 0 00:26:21.585 Max Number of I/O Queues: 1024 00:26:21.585 NVMe Specification Version (VS): 1.3 00:26:21.585 NVMe Specification Version (Identify): 1.3 00:26:21.585 Maximum Queue Entries: 1024 00:26:21.585 Contiguous Queues Required: No 00:26:21.585 Arbitration Mechanisms Supported 00:26:21.585 Weighted Round Robin: Not Supported 00:26:21.585 Vendor Specific: Not Supported 00:26:21.585 Reset Timeout: 7500 ms 00:26:21.585 Doorbell Stride: 4 bytes 00:26:21.585 NVM Subsystem Reset: Not Supported 00:26:21.585 Command Sets Supported 00:26:21.585 NVM Command Set: Supported 00:26:21.585 Boot Partition: Not Supported 00:26:21.585 Memory Page Size Minimum: 4096 bytes 00:26:21.585 Memory Page Size Maximum: 4096 bytes 00:26:21.585 Persistent Memory Region: Not Supported 00:26:21.586 Optional Asynchronous Events Supported 00:26:21.586 Namespace Attribute Notices: Not Supported 00:26:21.586 Firmware Activation Notices: Not Supported 00:26:21.586 ANA Change Notices: Not Supported 00:26:21.586 PLE Aggregate Log Change Notices: Not Supported 00:26:21.586 LBA Status Info Alert Notices: Not Supported 00:26:21.586 EGE Aggregate Log Change Notices: Not Supported 00:26:21.586 Normal NVM Subsystem Shutdown event: Not Supported 00:26:21.586 Zone Descriptor Change Notices: Not Supported 00:26:21.586 Discovery Log Change Notices: Supported 00:26:21.586 Controller Attributes 00:26:21.586 128-bit Host Identifier: Not Supported 00:26:21.586 Non-Operational Permissive Mode: Not Supported 00:26:21.586 NVM Sets: Not Supported 00:26:21.586 Read Recovery Levels: Not Supported 00:26:21.586 Endurance Groups: Not Supported 00:26:21.586 Predictable Latency Mode: Not Supported 00:26:21.586 Traffic Based Keep ALive: Not Supported 00:26:21.586 Namespace Granularity: Not Supported 00:26:21.586 SQ Associations: Not Supported 00:26:21.586 UUID List: Not Supported 00:26:21.586 Multi-Domain Subsystem: Not Supported 00:26:21.586 Fixed Capacity Management: Not Supported 
00:26:21.586 Variable Capacity Management: Not Supported 00:26:21.586 Delete Endurance Group: Not Supported 00:26:21.586 Delete NVM Set: Not Supported 00:26:21.586 Extended LBA Formats Supported: Not Supported 00:26:21.586 Flexible Data Placement Supported: Not Supported 00:26:21.586 00:26:21.586 Controller Memory Buffer Support 00:26:21.586 ================================ 00:26:21.586 Supported: No 00:26:21.586 00:26:21.586 Persistent Memory Region Support 00:26:21.586 ================================ 00:26:21.586 Supported: No 00:26:21.586 00:26:21.586 Admin Command Set Attributes 00:26:21.586 ============================ 00:26:21.586 Security Send/Receive: Not Supported 00:26:21.586 Format NVM: Not Supported 00:26:21.586 Firmware Activate/Download: Not Supported 00:26:21.586 Namespace Management: Not Supported 00:26:21.586 Device Self-Test: Not Supported 00:26:21.586 Directives: Not Supported 00:26:21.586 NVMe-MI: Not Supported 00:26:21.586 Virtualization Management: Not Supported 00:26:21.586 Doorbell Buffer Config: Not Supported 00:26:21.586 Get LBA Status Capability: Not Supported 00:26:21.586 Command & Feature Lockdown Capability: Not Supported 00:26:21.586 Abort Command Limit: 1 00:26:21.586 Async Event Request Limit: 1 00:26:21.586 Number of Firmware Slots: N/A 00:26:21.586 Firmware Slot 1 Read-Only: N/A 00:26:21.586 Firmware Activation Without Reset: N/A 00:26:21.586 Multiple Update Detection Support: N/A 00:26:21.586 Firmware Update Granularity: No Information Provided 00:26:21.586 Per-Namespace SMART Log: No 00:26:21.586 Asymmetric Namespace Access Log Page: Not Supported 00:26:21.586 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:21.586 Command Effects Log Page: Not Supported 00:26:21.586 Get Log Page Extended Data: Supported 00:26:21.586 Telemetry Log Pages: Not Supported 00:26:21.586 Persistent Event Log Pages: Not Supported 00:26:21.586 Supported Log Pages Log Page: May Support 00:26:21.586 Commands Supported & Effects Log Page: Not Supported 00:26:21.586 Feature Identifiers & Effects Log Page:May Support 00:26:21.586 NVMe-MI Commands & Effects Log Page: May Support 00:26:21.586 Data Area 4 for Telemetry Log: Not Supported 00:26:21.586 Error Log Page Entries Supported: 1 00:26:21.586 Keep Alive: Not Supported 00:26:21.586 00:26:21.586 NVM Command Set Attributes 00:26:21.586 ========================== 00:26:21.586 Submission Queue Entry Size 00:26:21.586 Max: 1 00:26:21.586 Min: 1 00:26:21.586 Completion Queue Entry Size 00:26:21.586 Max: 1 00:26:21.586 Min: 1 00:26:21.586 Number of Namespaces: 0 00:26:21.586 Compare Command: Not Supported 00:26:21.586 Write Uncorrectable Command: Not Supported 00:26:21.586 Dataset Management Command: Not Supported 00:26:21.586 Write Zeroes Command: Not Supported 00:26:21.586 Set Features Save Field: Not Supported 00:26:21.586 Reservations: Not Supported 00:26:21.586 Timestamp: Not Supported 00:26:21.586 Copy: Not Supported 00:26:21.586 Volatile Write Cache: Not Present 00:26:21.586 Atomic Write Unit (Normal): 1 00:26:21.586 Atomic Write Unit (PFail): 1 00:26:21.586 Atomic Compare & Write Unit: 1 00:26:21.586 Fused Compare & Write: Not Supported 00:26:21.586 Scatter-Gather List 00:26:21.586 SGL Command Set: Supported 00:26:21.586 SGL Keyed: Not Supported 00:26:21.586 SGL Bit Bucket Descriptor: Not Supported 00:26:21.586 SGL Metadata Pointer: Not Supported 00:26:21.586 Oversized SGL: Not Supported 00:26:21.586 SGL Metadata Address: Not Supported 00:26:21.586 SGL Offset: Supported 00:26:21.586 Transport SGL Data Block: Not 
Supported 00:26:21.586 Replay Protected Memory Block: Not Supported 00:26:21.586 00:26:21.586 Firmware Slot Information 00:26:21.586 ========================= 00:26:21.586 Active slot: 0 00:26:21.586 00:26:21.586 00:26:21.586 Error Log 00:26:21.586 ========= 00:26:21.586 00:26:21.586 Active Namespaces 00:26:21.586 ================= 00:26:21.586 Discovery Log Page 00:26:21.586 ================== 00:26:21.586 Generation Counter: 2 00:26:21.586 Number of Records: 2 00:26:21.586 Record Format: 0 00:26:21.586 00:26:21.586 Discovery Log Entry 0 00:26:21.586 ---------------------- 00:26:21.586 Transport Type: 3 (TCP) 00:26:21.586 Address Family: 1 (IPv4) 00:26:21.586 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:21.586 Entry Flags: 00:26:21.586 Duplicate Returned Information: 0 00:26:21.586 Explicit Persistent Connection Support for Discovery: 0 00:26:21.586 Transport Requirements: 00:26:21.586 Secure Channel: Not Specified 00:26:21.586 Port ID: 1 (0x0001) 00:26:21.586 Controller ID: 65535 (0xffff) 00:26:21.586 Admin Max SQ Size: 32 00:26:21.586 Transport Service Identifier: 4420 00:26:21.586 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:21.586 Transport Address: 10.0.0.1 00:26:21.586 Discovery Log Entry 1 00:26:21.586 ---------------------- 00:26:21.586 Transport Type: 3 (TCP) 00:26:21.586 Address Family: 1 (IPv4) 00:26:21.586 Subsystem Type: 2 (NVM Subsystem) 00:26:21.586 Entry Flags: 00:26:21.586 Duplicate Returned Information: 0 00:26:21.586 Explicit Persistent Connection Support for Discovery: 0 00:26:21.586 Transport Requirements: 00:26:21.586 Secure Channel: Not Specified 00:26:21.586 Port ID: 1 (0x0001) 00:26:21.586 Controller ID: 65535 (0xffff) 00:26:21.586 Admin Max SQ Size: 32 00:26:21.586 Transport Service Identifier: 4420 00:26:21.586 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:21.586 Transport Address: 10.0.0.1 00:26:21.586 13:36:39 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:21.845 get_feature(0x01) failed 00:26:21.845 get_feature(0x02) failed 00:26:21.845 get_feature(0x04) failed 00:26:21.845 ===================================================== 00:26:21.845 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:21.845 ===================================================== 00:26:21.845 Controller Capabilities/Features 00:26:21.845 ================================ 00:26:21.845 Vendor ID: 0000 00:26:21.845 Subsystem Vendor ID: 0000 00:26:21.845 Serial Number: 3508389f01631f4f9542 00:26:21.845 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:21.845 Firmware Version: 6.7.0-68 00:26:21.845 Recommended Arb Burst: 6 00:26:21.845 IEEE OUI Identifier: 00 00 00 00:26:21.845 Multi-path I/O 00:26:21.845 May have multiple subsystem ports: Yes 00:26:21.845 May have multiple controllers: Yes 00:26:21.845 Associated with SR-IOV VF: No 00:26:21.845 Max Data Transfer Size: Unlimited 00:26:21.845 Max Number of Namespaces: 1024 00:26:21.845 Max Number of I/O Queues: 128 00:26:21.845 NVMe Specification Version (VS): 1.3 00:26:21.845 NVMe Specification Version (Identify): 1.3 00:26:21.845 Maximum Queue Entries: 1024 00:26:21.845 Contiguous Queues Required: No 00:26:21.845 Arbitration Mechanisms Supported 00:26:21.845 Weighted Round Robin: Not Supported 00:26:21.845 Vendor Specific: Not Supported 00:26:21.845 Reset Timeout: 7500 ms 00:26:21.845 Doorbell Stride: 4 bytes 
00:26:21.845 NVM Subsystem Reset: Not Supported 00:26:21.845 Command Sets Supported 00:26:21.845 NVM Command Set: Supported 00:26:21.845 Boot Partition: Not Supported 00:26:21.845 Memory Page Size Minimum: 4096 bytes 00:26:21.845 Memory Page Size Maximum: 4096 bytes 00:26:21.845 Persistent Memory Region: Not Supported 00:26:21.845 Optional Asynchronous Events Supported 00:26:21.845 Namespace Attribute Notices: Supported 00:26:21.845 Firmware Activation Notices: Not Supported 00:26:21.845 ANA Change Notices: Supported 00:26:21.845 PLE Aggregate Log Change Notices: Not Supported 00:26:21.845 LBA Status Info Alert Notices: Not Supported 00:26:21.845 EGE Aggregate Log Change Notices: Not Supported 00:26:21.845 Normal NVM Subsystem Shutdown event: Not Supported 00:26:21.845 Zone Descriptor Change Notices: Not Supported 00:26:21.845 Discovery Log Change Notices: Not Supported 00:26:21.845 Controller Attributes 00:26:21.845 128-bit Host Identifier: Supported 00:26:21.845 Non-Operational Permissive Mode: Not Supported 00:26:21.845 NVM Sets: Not Supported 00:26:21.845 Read Recovery Levels: Not Supported 00:26:21.845 Endurance Groups: Not Supported 00:26:21.845 Predictable Latency Mode: Not Supported 00:26:21.845 Traffic Based Keep ALive: Supported 00:26:21.845 Namespace Granularity: Not Supported 00:26:21.845 SQ Associations: Not Supported 00:26:21.845 UUID List: Not Supported 00:26:21.845 Multi-Domain Subsystem: Not Supported 00:26:21.845 Fixed Capacity Management: Not Supported 00:26:21.845 Variable Capacity Management: Not Supported 00:26:21.845 Delete Endurance Group: Not Supported 00:26:21.845 Delete NVM Set: Not Supported 00:26:21.845 Extended LBA Formats Supported: Not Supported 00:26:21.845 Flexible Data Placement Supported: Not Supported 00:26:21.845 00:26:21.845 Controller Memory Buffer Support 00:26:21.845 ================================ 00:26:21.845 Supported: No 00:26:21.845 00:26:21.845 Persistent Memory Region Support 00:26:21.845 ================================ 00:26:21.845 Supported: No 00:26:21.845 00:26:21.845 Admin Command Set Attributes 00:26:21.845 ============================ 00:26:21.845 Security Send/Receive: Not Supported 00:26:21.845 Format NVM: Not Supported 00:26:21.845 Firmware Activate/Download: Not Supported 00:26:21.845 Namespace Management: Not Supported 00:26:21.845 Device Self-Test: Not Supported 00:26:21.845 Directives: Not Supported 00:26:21.845 NVMe-MI: Not Supported 00:26:21.845 Virtualization Management: Not Supported 00:26:21.845 Doorbell Buffer Config: Not Supported 00:26:21.845 Get LBA Status Capability: Not Supported 00:26:21.845 Command & Feature Lockdown Capability: Not Supported 00:26:21.845 Abort Command Limit: 4 00:26:21.845 Async Event Request Limit: 4 00:26:21.845 Number of Firmware Slots: N/A 00:26:21.845 Firmware Slot 1 Read-Only: N/A 00:26:21.845 Firmware Activation Without Reset: N/A 00:26:21.845 Multiple Update Detection Support: N/A 00:26:21.845 Firmware Update Granularity: No Information Provided 00:26:21.845 Per-Namespace SMART Log: Yes 00:26:21.845 Asymmetric Namespace Access Log Page: Supported 00:26:21.845 ANA Transition Time : 10 sec 00:26:21.845 00:26:21.845 Asymmetric Namespace Access Capabilities 00:26:21.845 ANA Optimized State : Supported 00:26:21.845 ANA Non-Optimized State : Supported 00:26:21.845 ANA Inaccessible State : Supported 00:26:21.845 ANA Persistent Loss State : Supported 00:26:21.845 ANA Change State : Supported 00:26:21.845 ANAGRPID is not changed : No 00:26:21.845 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:26:21.845 00:26:21.845 ANA Group Identifier Maximum : 128 00:26:21.845 Number of ANA Group Identifiers : 128 00:26:21.845 Max Number of Allowed Namespaces : 1024 00:26:21.845 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:21.845 Command Effects Log Page: Supported 00:26:21.845 Get Log Page Extended Data: Supported 00:26:21.845 Telemetry Log Pages: Not Supported 00:26:21.845 Persistent Event Log Pages: Not Supported 00:26:21.845 Supported Log Pages Log Page: May Support 00:26:21.845 Commands Supported & Effects Log Page: Not Supported 00:26:21.845 Feature Identifiers & Effects Log Page:May Support 00:26:21.845 NVMe-MI Commands & Effects Log Page: May Support 00:26:21.845 Data Area 4 for Telemetry Log: Not Supported 00:26:21.845 Error Log Page Entries Supported: 128 00:26:21.845 Keep Alive: Supported 00:26:21.845 Keep Alive Granularity: 1000 ms 00:26:21.845 00:26:21.845 NVM Command Set Attributes 00:26:21.845 ========================== 00:26:21.845 Submission Queue Entry Size 00:26:21.845 Max: 64 00:26:21.845 Min: 64 00:26:21.845 Completion Queue Entry Size 00:26:21.845 Max: 16 00:26:21.845 Min: 16 00:26:21.845 Number of Namespaces: 1024 00:26:21.845 Compare Command: Not Supported 00:26:21.845 Write Uncorrectable Command: Not Supported 00:26:21.845 Dataset Management Command: Supported 00:26:21.845 Write Zeroes Command: Supported 00:26:21.845 Set Features Save Field: Not Supported 00:26:21.845 Reservations: Not Supported 00:26:21.845 Timestamp: Not Supported 00:26:21.845 Copy: Not Supported 00:26:21.845 Volatile Write Cache: Present 00:26:21.845 Atomic Write Unit (Normal): 1 00:26:21.845 Atomic Write Unit (PFail): 1 00:26:21.845 Atomic Compare & Write Unit: 1 00:26:21.845 Fused Compare & Write: Not Supported 00:26:21.845 Scatter-Gather List 00:26:21.845 SGL Command Set: Supported 00:26:21.845 SGL Keyed: Not Supported 00:26:21.845 SGL Bit Bucket Descriptor: Not Supported 00:26:21.845 SGL Metadata Pointer: Not Supported 00:26:21.845 Oversized SGL: Not Supported 00:26:21.845 SGL Metadata Address: Not Supported 00:26:21.845 SGL Offset: Supported 00:26:21.845 Transport SGL Data Block: Not Supported 00:26:21.845 Replay Protected Memory Block: Not Supported 00:26:21.845 00:26:21.845 Firmware Slot Information 00:26:21.845 ========================= 00:26:21.845 Active slot: 0 00:26:21.845 00:26:21.845 Asymmetric Namespace Access 00:26:21.845 =========================== 00:26:21.845 Change Count : 0 00:26:21.845 Number of ANA Group Descriptors : 1 00:26:21.845 ANA Group Descriptor : 0 00:26:21.845 ANA Group ID : 1 00:26:21.845 Number of NSID Values : 1 00:26:21.845 Change Count : 0 00:26:21.845 ANA State : 1 00:26:21.845 Namespace Identifier : 1 00:26:21.845 00:26:21.845 Commands Supported and Effects 00:26:21.845 ============================== 00:26:21.845 Admin Commands 00:26:21.845 -------------- 00:26:21.845 Get Log Page (02h): Supported 00:26:21.845 Identify (06h): Supported 00:26:21.845 Abort (08h): Supported 00:26:21.845 Set Features (09h): Supported 00:26:21.845 Get Features (0Ah): Supported 00:26:21.845 Asynchronous Event Request (0Ch): Supported 00:26:21.845 Keep Alive (18h): Supported 00:26:21.845 I/O Commands 00:26:21.845 ------------ 00:26:21.845 Flush (00h): Supported 00:26:21.845 Write (01h): Supported LBA-Change 00:26:21.845 Read (02h): Supported 00:26:21.845 Write Zeroes (08h): Supported LBA-Change 00:26:21.845 Dataset Management (09h): Supported 00:26:21.845 00:26:21.845 Error Log 00:26:21.845 ========= 00:26:21.845 Entry: 0 00:26:21.845 Error Count: 0x3 00:26:21.845 Submission 
Queue Id: 0x0 00:26:21.845 Command Id: 0x5 00:26:21.845 Phase Bit: 0 00:26:21.845 Status Code: 0x2 00:26:21.845 Status Code Type: 0x0 00:26:21.845 Do Not Retry: 1 00:26:21.845 Error Location: 0x28 00:26:21.845 LBA: 0x0 00:26:21.845 Namespace: 0x0 00:26:21.845 Vendor Log Page: 0x0 00:26:21.845 ----------- 00:26:21.845 Entry: 1 00:26:21.845 Error Count: 0x2 00:26:21.845 Submission Queue Id: 0x0 00:26:21.845 Command Id: 0x5 00:26:21.845 Phase Bit: 0 00:26:21.845 Status Code: 0x2 00:26:21.845 Status Code Type: 0x0 00:26:21.846 Do Not Retry: 1 00:26:21.846 Error Location: 0x28 00:26:21.846 LBA: 0x0 00:26:21.846 Namespace: 0x0 00:26:21.846 Vendor Log Page: 0x0 00:26:21.846 ----------- 00:26:21.846 Entry: 2 00:26:21.846 Error Count: 0x1 00:26:21.846 Submission Queue Id: 0x0 00:26:21.846 Command Id: 0x4 00:26:21.846 Phase Bit: 0 00:26:21.846 Status Code: 0x2 00:26:21.846 Status Code Type: 0x0 00:26:21.846 Do Not Retry: 1 00:26:21.846 Error Location: 0x28 00:26:21.846 LBA: 0x0 00:26:21.846 Namespace: 0x0 00:26:21.846 Vendor Log Page: 0x0 00:26:21.846 00:26:21.846 Number of Queues 00:26:21.846 ================ 00:26:21.846 Number of I/O Submission Queues: 128 00:26:21.846 Number of I/O Completion Queues: 128 00:26:21.846 00:26:21.846 ZNS Specific Controller Data 00:26:21.846 ============================ 00:26:21.846 Zone Append Size Limit: 0 00:26:21.846 00:26:21.846 00:26:21.846 Active Namespaces 00:26:21.846 ================= 00:26:21.846 get_feature(0x05) failed 00:26:21.846 Namespace ID:1 00:26:21.846 Command Set Identifier: NVM (00h) 00:26:21.846 Deallocate: Supported 00:26:21.846 Deallocated/Unwritten Error: Not Supported 00:26:21.846 Deallocated Read Value: Unknown 00:26:21.846 Deallocate in Write Zeroes: Not Supported 00:26:21.846 Deallocated Guard Field: 0xFFFF 00:26:21.846 Flush: Supported 00:26:21.846 Reservation: Not Supported 00:26:21.846 Namespace Sharing Capabilities: Multiple Controllers 00:26:21.846 Size (in LBAs): 1310720 (5GiB) 00:26:21.846 Capacity (in LBAs): 1310720 (5GiB) 00:26:21.846 Utilization (in LBAs): 1310720 (5GiB) 00:26:21.846 UUID: 878c0ed8-7b89-4156-a069-b541148a57ec 00:26:21.846 Thin Provisioning: Not Supported 00:26:21.846 Per-NS Atomic Units: Yes 00:26:21.846 Atomic Boundary Size (Normal): 0 00:26:21.846 Atomic Boundary Size (PFail): 0 00:26:21.846 Atomic Boundary Offset: 0 00:26:21.846 NGUID/EUI64 Never Reused: No 00:26:21.846 ANA group ID: 1 00:26:21.846 Namespace Write Protected: No 00:26:21.846 Number of LBA Formats: 1 00:26:21.846 Current LBA Format: LBA Format #00 00:26:21.846 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:21.846 00:26:21.846 13:36:39 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:21.846 13:36:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:21.846 13:36:39 -- nvmf/common.sh@117 -- # sync 00:26:21.846 13:36:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:21.846 13:36:39 -- nvmf/common.sh@120 -- # set +e 00:26:21.846 13:36:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:21.846 13:36:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:21.846 rmmod nvme_tcp 00:26:21.846 rmmod nvme_fabrics 00:26:21.846 13:36:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:21.846 13:36:39 -- nvmf/common.sh@124 -- # set -e 00:26:21.846 13:36:39 -- nvmf/common.sh@125 -- # return 0 00:26:21.846 13:36:39 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:26:21.846 13:36:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:21.846 13:36:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:21.846 13:36:39 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:21.846 13:36:39 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.846 13:36:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:22.104 13:36:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.104 13:36:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.104 13:36:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.104 13:36:39 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:22.104 13:36:39 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:22.104 13:36:39 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:22.104 13:36:39 -- nvmf/common.sh@675 -- # echo 0 00:26:22.104 13:36:39 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:22.104 13:36:39 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:22.104 13:36:39 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:22.104 13:36:39 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:22.104 13:36:39 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:26:22.104 13:36:39 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:26:22.104 13:36:39 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:22.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:22.928 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:22.928 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:22.928 00:26:22.928 real 0m2.928s 00:26:22.928 user 0m1.017s 00:26:22.928 sys 0m1.377s 00:26:22.928 13:36:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:22.928 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 ************************************ 00:26:22.928 END TEST nvmf_identify_kernel_target 00:26:22.928 ************************************ 00:26:22.928 13:36:40 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:22.928 13:36:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:22.928 13:36:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:22.928 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:26:23.186 ************************************ 00:26:23.186 START TEST nvmf_auth 00:26:23.186 ************************************ 00:26:23.186 13:36:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:23.186 * Looking for test storage... 
00:26:23.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:23.186 13:36:40 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:23.187 13:36:40 -- nvmf/common.sh@7 -- # uname -s 00:26:23.187 13:36:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.187 13:36:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.187 13:36:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.187 13:36:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.187 13:36:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.187 13:36:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.187 13:36:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.187 13:36:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.187 13:36:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.187 13:36:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:23.187 13:36:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:26:23.187 13:36:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.187 13:36:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.187 13:36:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:23.187 13:36:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.187 13:36:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.187 13:36:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.187 13:36:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.187 13:36:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.187 13:36:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.187 13:36:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.187 13:36:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.187 13:36:40 -- paths/export.sh@5 -- # export PATH 00:26:23.187 13:36:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.187 13:36:40 -- nvmf/common.sh@47 -- # : 0 00:26:23.187 13:36:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.187 13:36:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:23.187 13:36:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.187 13:36:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.187 13:36:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.187 13:36:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.187 13:36:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.187 13:36:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.187 13:36:40 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:23.187 13:36:40 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:23.187 13:36:40 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:23.187 13:36:40 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:23.187 13:36:40 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:23.187 13:36:40 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:23.187 13:36:40 -- host/auth.sh@21 -- # keys=() 00:26:23.187 13:36:40 -- host/auth.sh@77 -- # nvmftestinit 00:26:23.187 13:36:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:23.187 13:36:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.187 13:36:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:23.187 13:36:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:23.187 13:36:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:23.187 13:36:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.187 13:36:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.187 13:36:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.187 13:36:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:23.187 13:36:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:23.187 13:36:40 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.187 13:36:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.187 13:36:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:23.187 13:36:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:23.187 13:36:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:23.187 13:36:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:23.187 13:36:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:23.187 13:36:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.187 13:36:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:23.187 13:36:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:23.187 13:36:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:23.187 13:36:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:23.187 13:36:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:23.187 13:36:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:23.187 Cannot find device "nvmf_tgt_br" 00:26:23.187 13:36:40 -- nvmf/common.sh@155 -- # true 00:26:23.187 13:36:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.187 Cannot find device "nvmf_tgt_br2" 00:26:23.187 13:36:40 -- nvmf/common.sh@156 -- # true 00:26:23.187 13:36:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:23.187 13:36:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:23.187 Cannot find device "nvmf_tgt_br" 00:26:23.187 13:36:40 -- nvmf/common.sh@158 -- # true 00:26:23.187 13:36:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:23.187 Cannot find device "nvmf_tgt_br2" 00:26:23.187 13:36:40 -- nvmf/common.sh@159 -- # true 00:26:23.187 13:36:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:23.187 13:36:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:23.445 13:36:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.445 13:36:40 -- nvmf/common.sh@162 -- # true 00:26:23.445 13:36:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.445 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.445 13:36:40 -- nvmf/common.sh@163 -- # true 00:26:23.445 13:36:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:23.445 13:36:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:23.445 13:36:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:23.445 13:36:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:23.445 13:36:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:23.445 13:36:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:23.445 13:36:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:23.445 13:36:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:23.445 13:36:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:23.445 13:36:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:23.445 13:36:40 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:23.445 13:36:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:23.445 13:36:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:23.445 13:36:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:23.445 13:36:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:23.445 13:36:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:23.445 13:36:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:23.445 13:36:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:23.445 13:36:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:23.445 13:36:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:23.445 13:36:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:23.445 13:36:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:23.445 13:36:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:23.445 13:36:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:23.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:26:23.445 00:26:23.445 --- 10.0.0.2 ping statistics --- 00:26:23.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.445 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:23.445 13:36:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:23.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:23.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:26:23.445 00:26:23.445 --- 10.0.0.3 ping statistics --- 00:26:23.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.446 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:23.446 13:36:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:23.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:23.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:26:23.446 00:26:23.446 --- 10.0.0.1 ping statistics --- 00:26:23.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.446 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:26:23.446 13:36:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.446 13:36:40 -- nvmf/common.sh@422 -- # return 0 00:26:23.446 13:36:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:23.446 13:36:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.446 13:36:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:23.446 13:36:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:23.446 13:36:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.446 13:36:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:23.446 13:36:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:23.446 13:36:40 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:26:23.446 13:36:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:23.446 13:36:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:23.446 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:26:23.446 13:36:40 -- nvmf/common.sh@470 -- # nvmfpid=83911 00:26:23.446 13:36:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:23.446 13:36:40 -- nvmf/common.sh@471 -- # waitforlisten 83911 00:26:23.446 13:36:40 -- common/autotest_common.sh@817 -- # '[' -z 83911 ']' 00:26:23.446 13:36:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.446 13:36:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:23.446 13:36:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
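For reference, the nvmf_veth_init trace above wires the test network as three veth pairs: the initiator end (nvmf_init_if, 10.0.0.1/24) stays in the default namespace, the two target ends (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are joined on the nvmf_br bridge with TCP port 4420 opened for NVMe/TCP. A condensed standalone sketch of that wiring follows; interface names and addresses are taken from the trace, it assumes root privileges, and it is an illustration rather than the test helper itself:

# Sketch of the veth/namespace topology built by nvmf_veth_init (assumed standalone reproduction)
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end is used for traffic, its peer is attached to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side ends into the target namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addresses: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in, permit bridge-local forwarding, then sanity-check reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
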
00:26:23.446 13:36:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:23.446 13:36:40 -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 13:36:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:24.819 13:36:41 -- common/autotest_common.sh@850 -- # return 0 00:26:24.819 13:36:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:24.819 13:36:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:24.819 13:36:41 -- common/autotest_common.sh@10 -- # set +x 00:26:24.819 13:36:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.819 13:36:42 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:24.819 13:36:42 -- host/auth.sh@81 -- # gen_key null 32 00:26:24.819 13:36:42 -- host/auth.sh@53 -- # local digest len file key 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # local -A digests 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # digest=null 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # len=32 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # key=3a8705b78a1e78e5dc50dce13dd7ae27 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.2ae 00:26:24.819 13:36:42 -- host/auth.sh@59 -- # format_dhchap_key 3a8705b78a1e78e5dc50dce13dd7ae27 0 00:26:24.819 13:36:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 3a8705b78a1e78e5dc50dce13dd7ae27 0 00:26:24.819 13:36:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # key=3a8705b78a1e78e5dc50dce13dd7ae27 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # digest=0 00:26:24.819 13:36:42 -- nvmf/common.sh@694 -- # python - 00:26:24.819 13:36:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.2ae 00:26:24.819 13:36:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.2ae 00:26:24.819 13:36:42 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.2ae 00:26:24.819 13:36:42 -- host/auth.sh@82 -- # gen_key null 48 00:26:24.819 13:36:42 -- host/auth.sh@53 -- # local digest len file key 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # local -A digests 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # digest=null 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # len=48 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # key=e8d0587179ddb130d78c6377f1af597247d3b0739ded24f3 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.lO9 00:26:24.819 13:36:42 -- host/auth.sh@59 -- # format_dhchap_key e8d0587179ddb130d78c6377f1af597247d3b0739ded24f3 0 00:26:24.819 13:36:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 e8d0587179ddb130d78c6377f1af597247d3b0739ded24f3 0 00:26:24.819 13:36:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # key=e8d0587179ddb130d78c6377f1af597247d3b0739ded24f3 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # digest=0 00:26:24.819 
13:36:42 -- nvmf/common.sh@694 -- # python - 00:26:24.819 13:36:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.lO9 00:26:24.819 13:36:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.lO9 00:26:24.819 13:36:42 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.lO9 00:26:24.819 13:36:42 -- host/auth.sh@83 -- # gen_key sha256 32 00:26:24.819 13:36:42 -- host/auth.sh@53 -- # local digest len file key 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # local -A digests 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # digest=sha256 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # len=32 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # key=3f978e570aa79f91ec947155837149a3 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.1Kg 00:26:24.819 13:36:42 -- host/auth.sh@59 -- # format_dhchap_key 3f978e570aa79f91ec947155837149a3 1 00:26:24.819 13:36:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 3f978e570aa79f91ec947155837149a3 1 00:26:24.819 13:36:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # key=3f978e570aa79f91ec947155837149a3 00:26:24.819 13:36:42 -- nvmf/common.sh@693 -- # digest=1 00:26:24.819 13:36:42 -- nvmf/common.sh@694 -- # python - 00:26:24.819 13:36:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.1Kg 00:26:24.819 13:36:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.1Kg 00:26:24.819 13:36:42 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.1Kg 00:26:24.819 13:36:42 -- host/auth.sh@84 -- # gen_key sha384 48 00:26:24.819 13:36:42 -- host/auth.sh@53 -- # local digest len file key 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:24.819 13:36:42 -- host/auth.sh@54 -- # local -A digests 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # digest=sha384 00:26:24.819 13:36:42 -- host/auth.sh@56 -- # len=48 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:24.819 13:36:42 -- host/auth.sh@57 -- # key=7742cdbb4929893a38fe23abc99190eabcd0862649db1f8a 00:26:24.819 13:36:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:26:24.820 13:36:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.kE7 00:26:24.820 13:36:42 -- host/auth.sh@59 -- # format_dhchap_key 7742cdbb4929893a38fe23abc99190eabcd0862649db1f8a 2 00:26:24.820 13:36:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 7742cdbb4929893a38fe23abc99190eabcd0862649db1f8a 2 00:26:24.820 13:36:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:24.820 13:36:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:24.820 13:36:42 -- nvmf/common.sh@693 -- # key=7742cdbb4929893a38fe23abc99190eabcd0862649db1f8a 00:26:24.820 13:36:42 -- nvmf/common.sh@693 -- # digest=2 00:26:24.820 13:36:42 -- nvmf/common.sh@694 -- # python - 00:26:25.078 13:36:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.kE7 00:26:25.078 13:36:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.kE7 00:26:25.078 13:36:42 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.kE7 00:26:25.078 13:36:42 -- host/auth.sh@85 -- # gen_key sha512 64 00:26:25.078 13:36:42 -- host/auth.sh@53 -- # local digest len file key 00:26:25.078 13:36:42 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:25.078 13:36:42 -- host/auth.sh@54 -- # local -A digests 00:26:25.078 13:36:42 -- host/auth.sh@56 -- # digest=sha512 00:26:25.078 13:36:42 -- host/auth.sh@56 -- # len=64 00:26:25.078 13:36:42 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:25.078 13:36:42 -- host/auth.sh@57 -- # key=e66579e34fb66ff96e97debdd0181e721db14a7380fa0d9d9a87cfea676b84c7 00:26:25.078 13:36:42 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:26:25.078 13:36:42 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.LRz 00:26:25.078 13:36:42 -- host/auth.sh@59 -- # format_dhchap_key e66579e34fb66ff96e97debdd0181e721db14a7380fa0d9d9a87cfea676b84c7 3 00:26:25.078 13:36:42 -- nvmf/common.sh@708 -- # format_key DHHC-1 e66579e34fb66ff96e97debdd0181e721db14a7380fa0d9d9a87cfea676b84c7 3 00:26:25.078 13:36:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:26:25.078 13:36:42 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:26:25.078 13:36:42 -- nvmf/common.sh@693 -- # key=e66579e34fb66ff96e97debdd0181e721db14a7380fa0d9d9a87cfea676b84c7 00:26:25.078 13:36:42 -- nvmf/common.sh@693 -- # digest=3 00:26:25.078 13:36:42 -- nvmf/common.sh@694 -- # python - 00:26:25.078 13:36:42 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.LRz 00:26:25.078 13:36:42 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.LRz 00:26:25.078 13:36:42 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.LRz 00:26:25.078 13:36:42 -- host/auth.sh@87 -- # waitforlisten 83911 00:26:25.078 13:36:42 -- common/autotest_common.sh@817 -- # '[' -z 83911 ']' 00:26:25.078 13:36:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.078 13:36:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:25.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.078 13:36:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
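The gen_key traces above produce the DHHC-1 secrets used later in this log: a random hex string is read from /dev/urandom with xxd, then wrapped as DHHC-1:<digest-id>:<base64 payload>:, where the digest id is 00 for a null digest, 01 for SHA-256, 02 for SHA-384 and 03 for SHA-512, and the base64 payload is the ASCII secret followed by its little-endian CRC-32 (the NVMe DH-HMAC-CHAP secret representation). A minimal standalone sketch of that encoding, assuming python3 is available; it mirrors, but is not, the test's own format_dhchap_key helper:

# Sketch: build a DHHC-1 secret string from a random hex key and a digest id (0-3)
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters, as in 'gen_key null 32' above
digest=0                               # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret, digest = sys.argv[1].encode(), int(sys.argv[2])
# base64 of the ASCII secret followed by its CRC-32, little-endian (NVMe DH-HMAC-CHAP secret format)
payload = base64.b64encode(secret + zlib.crc32(secret).to_bytes(4, "little")).decode()
print(f"DHHC-1:{digest:02x}:{payload}:")
EOF
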
00:26:25.078 13:36:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:25.078 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:25.336 13:36:42 -- common/autotest_common.sh@850 -- # return 0 00:26:25.336 13:36:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:25.336 13:36:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2ae 00:26:25.336 13:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.336 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.336 13:36:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:25.336 13:36:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lO9 00:26:25.336 13:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.336 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.336 13:36:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:25.336 13:36:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1Kg 00:26:25.336 13:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.336 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.336 13:36:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:25.336 13:36:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kE7 00:26:25.336 13:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.336 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.336 13:36:42 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:26:25.336 13:36:42 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LRz 00:26:25.336 13:36:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.336 13:36:42 -- common/autotest_common.sh@10 -- # set +x 00:26:25.336 13:36:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.336 13:36:42 -- host/auth.sh@92 -- # nvmet_auth_init 00:26:25.336 13:36:42 -- host/auth.sh@35 -- # get_main_ns_ip 00:26:25.336 13:36:42 -- nvmf/common.sh@717 -- # local ip 00:26:25.336 13:36:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:25.336 13:36:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:25.336 13:36:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.336 13:36:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.336 13:36:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:25.336 13:36:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.336 13:36:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:25.336 13:36:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:25.336 13:36:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:25.336 13:36:42 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:25.336 13:36:42 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:25.336 13:36:42 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:26:25.336 13:36:42 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:25.337 13:36:42 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:25.337 13:36:42 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:25.337 13:36:42 -- nvmf/common.sh@628 -- # local block nvme 00:26:25.337 13:36:42 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:26:25.337 13:36:42 -- nvmf/common.sh@631 -- # modprobe nvmet 00:26:25.337 13:36:42 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:25.337 13:36:42 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:25.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:25.594 Waiting for block devices as requested 00:26:25.853 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.853 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:26.417 13:36:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:26.417 13:36:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:26.417 13:36:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:26:26.417 13:36:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:26.417 13:36:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:26.417 13:36:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.417 13:36:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:26:26.417 13:36:43 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:26.417 13:36:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:26.417 No valid GPT data, bailing 00:26:26.417 13:36:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:26.417 13:36:43 -- scripts/common.sh@391 -- # pt= 00:26:26.417 13:36:43 -- scripts/common.sh@392 -- # return 1 00:26:26.417 13:36:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:26:26.417 13:36:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:26.417 13:36:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:26.417 13:36:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:26:26.417 13:36:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:26:26.417 13:36:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:26.417 13:36:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.417 13:36:43 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:26:26.417 13:36:43 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:26.417 13:36:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:26.674 No valid GPT data, bailing 00:26:26.674 13:36:43 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:26.674 13:36:43 -- scripts/common.sh@391 -- # pt= 00:26:26.674 13:36:43 -- scripts/common.sh@392 -- # return 1 00:26:26.674 13:36:43 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:26:26.674 13:36:43 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:26.674 13:36:43 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:26.674 13:36:43 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:26:26.674 13:36:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:26:26.674 13:36:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:26.674 13:36:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.674 13:36:43 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:26:26.674 13:36:43 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:26.674 13:36:43 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:26.674 No valid GPT data, bailing 00:26:26.674 13:36:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:26.674 13:36:44 -- scripts/common.sh@391 -- # pt= 00:26:26.674 13:36:44 -- scripts/common.sh@392 -- # return 1 00:26:26.674 13:36:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:26:26.674 13:36:44 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:26:26.674 13:36:44 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:26.674 13:36:44 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:26:26.674 13:36:44 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:26.674 13:36:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:26.674 13:36:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:26.674 13:36:44 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:26:26.674 13:36:44 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:26.674 13:36:44 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:26.674 No valid GPT data, bailing 00:26:26.674 13:36:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:26.674 13:36:44 -- scripts/common.sh@391 -- # pt= 00:26:26.674 13:36:44 -- scripts/common.sh@392 -- # return 1 00:26:26.674 13:36:44 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:26:26.674 13:36:44 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:26:26.674 13:36:44 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:26.674 13:36:44 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:26.674 13:36:44 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:26.674 13:36:44 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:26.674 13:36:44 -- nvmf/common.sh@656 -- # echo 1 00:26:26.674 13:36:44 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:26:26.674 13:36:44 -- nvmf/common.sh@658 -- # echo 1 00:26:26.674 13:36:44 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:26:26.674 13:36:44 -- nvmf/common.sh@661 -- # echo tcp 00:26:26.674 13:36:44 -- nvmf/common.sh@662 -- # echo 4420 00:26:26.674 13:36:44 -- nvmf/common.sh@663 -- # echo ipv4 00:26:26.674 13:36:44 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:26.932 13:36:44 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.1 -t tcp -s 4420 00:26:26.932 00:26:26.932 Discovery Log Number of Records 2, Generation counter 2 00:26:26.932 =====Discovery Log Entry 0====== 00:26:26.933 trtype: tcp 00:26:26.933 adrfam: ipv4 00:26:26.933 subtype: current discovery subsystem 00:26:26.933 treq: not specified, sq flow control disable supported 00:26:26.933 portid: 1 00:26:26.933 trsvcid: 4420 00:26:26.933 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:26.933 traddr: 10.0.0.1 00:26:26.933 eflags: none 00:26:26.933 sectype: none 00:26:26.933 =====Discovery Log Entry 1====== 00:26:26.933 trtype: tcp 00:26:26.933 adrfam: ipv4 00:26:26.933 subtype: nvme subsystem 00:26:26.933 treq: not specified, sq flow control disable supported 
00:26:26.933 portid: 1 00:26:26.933 trsvcid: 4420 00:26:26.933 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:26.933 traddr: 10.0.0.1 00:26:26.933 eflags: none 00:26:26.933 sectype: none 00:26:26.933 13:36:44 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:26.933 13:36:44 -- host/auth.sh@37 -- # echo 0 00:26:26.933 13:36:44 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:26.933 13:36:44 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:26.933 13:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:26.933 13:36:44 -- host/auth.sh@44 -- # digest=sha256 00:26:26.933 13:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.933 13:36:44 -- host/auth.sh@44 -- # keyid=1 00:26:26.933 13:36:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:26.933 13:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:26.933 13:36:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:26.933 13:36:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:26.933 13:36:44 -- host/auth.sh@100 -- # IFS=, 00:26:26.933 13:36:44 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:26:26.933 13:36:44 -- host/auth.sh@100 -- # IFS=, 00:26:26.933 13:36:44 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.933 13:36:44 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:26.933 13:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:26.933 13:36:44 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:26:26.933 13:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.933 13:36:44 -- host/auth.sh@68 -- # keyid=1 00:26:26.933 13:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:26.933 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.933 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:26.933 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.933 13:36:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:26.933 13:36:44 -- nvmf/common.sh@717 -- # local ip 00:26:26.933 13:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:26.933 13:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:26.933 13:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.933 13:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.933 13:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:26.933 13:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.933 13:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:26.933 13:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:26.933 13:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:26.933 13:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:26.933 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.933 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 
nvme0n1 00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.191 13:36:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.191 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.191 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.191 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.191 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:27.191 13:36:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.191 13:36:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.191 13:36:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:27.191 13:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.191 13:36:44 -- host/auth.sh@44 -- # digest=sha256 00:26:27.191 13:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.191 13:36:44 -- host/auth.sh@44 -- # keyid=0 00:26:27.191 13:36:44 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:27.191 13:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.191 13:36:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.191 13:36:44 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:27.191 13:36:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:26:27.191 13:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.191 13:36:44 -- host/auth.sh@68 -- # digest=sha256 00:26:27.191 13:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.191 13:36:44 -- host/auth.sh@68 -- # keyid=0 00:26:27.191 13:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.191 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.191 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.191 13:36:44 -- nvmf/common.sh@717 -- # local ip 00:26:27.191 13:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.191 13:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.191 13:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.191 13:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.191 13:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.191 13:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.191 13:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.191 13:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.191 13:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.191 13:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:27.191 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.191 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 nvme0n1 
00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.191 13:36:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.191 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.191 13:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.191 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.191 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.449 13:36:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.449 13:36:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.449 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.449 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.449 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.449 13:36:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.449 13:36:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:27.449 13:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.449 13:36:44 -- host/auth.sh@44 -- # digest=sha256 00:26:27.449 13:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.449 13:36:44 -- host/auth.sh@44 -- # keyid=1 00:26:27.450 13:36:44 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:27.450 13:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.450 13:36:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.450 13:36:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:27.450 13:36:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:26:27.450 13:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # digest=sha256 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # keyid=1 00:26:27.450 13:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.450 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.450 13:36:44 -- nvmf/common.sh@717 -- # local ip 00:26:27.450 13:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.450 13:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.450 13:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.450 13:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.450 nvme0n1 00:26:27.450 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.450 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.450 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.450 13:36:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:27.450 13:36:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.450 13:36:44 -- host/auth.sh@44 -- # digest=sha256 00:26:27.450 13:36:44 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.450 13:36:44 -- host/auth.sh@44 -- # keyid=2 00:26:27.450 13:36:44 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:27.450 13:36:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.450 13:36:44 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.450 13:36:44 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:27.450 13:36:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:26:27.450 13:36:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # digest=sha256 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.450 13:36:44 -- host/auth.sh@68 -- # keyid=2 00:26:27.450 13:36:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.450 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.450 13:36:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.450 13:36:44 -- nvmf/common.sh@717 -- # local ip 00:26:27.450 13:36:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.450 13:36:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.450 13:36:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.450 13:36:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.450 13:36:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.450 13:36:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:27.450 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.450 13:36:44 -- common/autotest_common.sh@10 -- # set +x 00:26:27.709 nvme0n1 00:26:27.709 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.709 13:36:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.709 13:36:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.709 13:36:44 -- common/autotest_common.sh@10 -- # 
set +x 00:26:27.709 13:36:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.709 13:36:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.709 13:36:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.709 13:36:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.709 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.709 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.709 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.709 13:36:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.709 13:36:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:27.709 13:36:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.709 13:36:45 -- host/auth.sh@44 -- # digest=sha256 00:26:27.709 13:36:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.709 13:36:45 -- host/auth.sh@44 -- # keyid=3 00:26:27.709 13:36:45 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:27.709 13:36:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.709 13:36:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.709 13:36:45 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:27.709 13:36:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:26:27.709 13:36:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.709 13:36:45 -- host/auth.sh@68 -- # digest=sha256 00:26:27.709 13:36:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.709 13:36:45 -- host/auth.sh@68 -- # keyid=3 00:26:27.709 13:36:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.709 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.709 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.709 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.709 13:36:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.709 13:36:45 -- nvmf/common.sh@717 -- # local ip 00:26:27.709 13:36:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.709 13:36:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.709 13:36:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.709 13:36:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.709 13:36:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.709 13:36:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.709 13:36:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.709 13:36:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.709 13:36:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.709 13:36:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:27.709 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.709 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.709 nvme0n1 00:26:27.709 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.709 13:36:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.709 13:36:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.709 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.709 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 13:36:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.968 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.968 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.968 13:36:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:27.968 13:36:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # digest=sha256 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # keyid=4 00:26:27.968 13:36:45 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:27.968 13:36:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.968 13:36:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:27.968 13:36:45 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:27.968 13:36:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:26:27.968 13:36:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:27.968 13:36:45 -- host/auth.sh@68 -- # digest=sha256 00:26:27.968 13:36:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:27.968 13:36:45 -- host/auth.sh@68 -- # keyid=4 00:26:27.968 13:36:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:27.968 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.968 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:27.968 13:36:45 -- nvmf/common.sh@717 -- # local ip 00:26:27.968 13:36:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:27.968 13:36:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:27.968 13:36:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.968 13:36:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.968 13:36:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:27.968 13:36:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.968 13:36:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:27.968 13:36:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:27.968 13:36:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:27.968 13:36:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.968 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.968 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 nvme0n1 00:26:27.968 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.968 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.968 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 13:36:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:27.968 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.968 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:27.968 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:27.968 13:36:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.968 13:36:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:27.968 13:36:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:27.968 13:36:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # digest=sha256 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.968 13:36:45 -- host/auth.sh@44 -- # keyid=0 00:26:27.969 13:36:45 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:27.969 13:36:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:27.969 13:36:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.535 13:36:45 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:28.535 13:36:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:26:28.535 13:36:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # digest=sha256 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # keyid=0 00:26:28.535 13:36:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.535 13:36:45 -- nvmf/common.sh@717 -- # local ip 00:26:28.535 13:36:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.535 13:36:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.535 13:36:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.535 13:36:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 nvme0n1 00:26:28.535 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.535 13:36:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:28.535 13:36:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.535 13:36:45 -- host/auth.sh@44 -- # digest=sha256 00:26:28.535 13:36:45 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.535 13:36:45 -- host/auth.sh@44 -- # keyid=1 00:26:28.535 13:36:45 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:28.535 13:36:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:28.535 13:36:45 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.535 13:36:45 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:28.535 13:36:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:26:28.535 13:36:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # digest=sha256 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.535 13:36:45 -- host/auth.sh@68 -- # keyid=1 00:26:28.535 13:36:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.535 13:36:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.535 13:36:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.535 13:36:45 -- nvmf/common.sh@717 -- # local ip 00:26:28.535 13:36:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.535 13:36:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.535 13:36:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.535 13:36:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.535 13:36:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.535 13:36:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:28.535 13:36:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.535 13:36:45 -- common/autotest_common.sh@10 -- # set +x 00:26:28.793 nvme0n1 00:26:28.793 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.793 13:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.793 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.793 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.793 13:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:28.793 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.793 13:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.793 13:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.793 13:36:46 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:28.793 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.793 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.793 13:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:28.793 13:36:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:28.793 13:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:28.793 13:36:46 -- host/auth.sh@44 -- # digest=sha256 00:26:28.793 13:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.793 13:36:46 -- host/auth.sh@44 -- # keyid=2 00:26:28.793 13:36:46 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:28.793 13:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:28.793 13:36:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:28.793 13:36:46 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:28.793 13:36:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:26:28.793 13:36:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:28.793 13:36:46 -- host/auth.sh@68 -- # digest=sha256 00:26:28.793 13:36:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:28.793 13:36:46 -- host/auth.sh@68 -- # keyid=2 00:26:28.793 13:36:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:28.793 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.793 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:28.793 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:28.793 13:36:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:28.793 13:36:46 -- nvmf/common.sh@717 -- # local ip 00:26:28.793 13:36:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:28.793 13:36:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:28.793 13:36:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.793 13:36:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.793 13:36:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:28.793 13:36:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.793 13:36:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:28.793 13:36:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:28.793 13:36:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:28.793 13:36:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:28.793 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:28.793 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.052 nvme0n1 00:26:29.052 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.052 13:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:29.052 13:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.052 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.052 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.052 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.052 13:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.052 13:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.052 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.052 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.052 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.052 
13:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:29.052 13:36:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:29.052 13:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:29.052 13:36:46 -- host/auth.sh@44 -- # digest=sha256 00:26:29.052 13:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.052 13:36:46 -- host/auth.sh@44 -- # keyid=3 00:26:29.052 13:36:46 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:29.052 13:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:29.052 13:36:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:29.052 13:36:46 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:29.052 13:36:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:26:29.052 13:36:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:29.052 13:36:46 -- host/auth.sh@68 -- # digest=sha256 00:26:29.052 13:36:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:29.052 13:36:46 -- host/auth.sh@68 -- # keyid=3 00:26:29.052 13:36:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:29.052 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.052 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.052 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.052 13:36:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:29.052 13:36:46 -- nvmf/common.sh@717 -- # local ip 00:26:29.052 13:36:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:29.052 13:36:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:29.052 13:36:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.052 13:36:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.052 13:36:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:29.052 13:36:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.052 13:36:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:29.052 13:36:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:29.052 13:36:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:29.052 13:36:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:29.052 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.052 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.311 nvme0n1 00:26:29.311 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.311 13:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.311 13:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:29.311 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.311 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.311 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.311 13:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.311 13:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.311 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.311 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.311 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.311 13:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:29.311 13:36:46 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:26:29.311 13:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:29.311 13:36:46 -- host/auth.sh@44 -- # digest=sha256 00:26:29.311 13:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.311 13:36:46 -- host/auth.sh@44 -- # keyid=4 00:26:29.311 13:36:46 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:29.311 13:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:29.311 13:36:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:29.311 13:36:46 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:29.311 13:36:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:26:29.311 13:36:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:29.311 13:36:46 -- host/auth.sh@68 -- # digest=sha256 00:26:29.311 13:36:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:29.311 13:36:46 -- host/auth.sh@68 -- # keyid=4 00:26:29.312 13:36:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:29.312 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.312 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.312 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.312 13:36:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:29.312 13:36:46 -- nvmf/common.sh@717 -- # local ip 00:26:29.312 13:36:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:29.312 13:36:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:29.312 13:36:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.312 13:36:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.312 13:36:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:29.312 13:36:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.312 13:36:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:29.312 13:36:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:29.312 13:36:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:29.312 13:36:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.312 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.312 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.312 nvme0n1 00:26:29.312 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.312 13:36:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.312 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.312 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.312 13:36:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:29.312 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.570 13:36:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.570 13:36:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.570 13:36:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.570 13:36:46 -- common/autotest_common.sh@10 -- # set +x 00:26:29.570 13:36:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.570 13:36:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.570 13:36:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:29.570 13:36:46 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:26:29.570 13:36:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:29.570 13:36:46 -- host/auth.sh@44 -- # digest=sha256 00:26:29.570 13:36:46 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.570 13:36:46 -- host/auth.sh@44 -- # keyid=0 00:26:29.570 13:36:46 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:29.570 13:36:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:29.570 13:36:46 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:30.137 13:36:47 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:30.137 13:36:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:26:30.137 13:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:30.137 13:36:47 -- host/auth.sh@68 -- # digest=sha256 00:26:30.137 13:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:30.137 13:36:47 -- host/auth.sh@68 -- # keyid=0 00:26:30.137 13:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:30.137 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.137 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.137 13:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:30.137 13:36:47 -- nvmf/common.sh@717 -- # local ip 00:26:30.137 13:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:30.137 13:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:30.137 13:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.137 13:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.137 13:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:30.137 13:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.137 13:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:30.137 13:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:30.137 13:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:30.137 13:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:30.137 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.137 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 nvme0n1 00:26:30.137 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.137 13:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.137 13:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:30.137 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.137 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.395 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.396 13:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.396 13:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.396 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.396 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.396 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.396 13:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:30.396 13:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:30.396 13:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:30.396 13:36:47 -- host/auth.sh@44 -- # 
digest=sha256 00:26:30.396 13:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.396 13:36:47 -- host/auth.sh@44 -- # keyid=1 00:26:30.396 13:36:47 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:30.396 13:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:30.396 13:36:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:30.396 13:36:47 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:30.396 13:36:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:26:30.396 13:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:30.396 13:36:47 -- host/auth.sh@68 -- # digest=sha256 00:26:30.396 13:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:30.396 13:36:47 -- host/auth.sh@68 -- # keyid=1 00:26:30.396 13:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:30.396 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.396 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.396 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.396 13:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:30.396 13:36:47 -- nvmf/common.sh@717 -- # local ip 00:26:30.396 13:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:30.396 13:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:30.396 13:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.396 13:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.396 13:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:30.396 13:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.396 13:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:30.396 13:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:30.396 13:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:30.396 13:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:30.396 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.396 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 nvme0n1 00:26:30.654 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.654 13:36:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.654 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.654 13:36:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:30.654 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.654 13:36:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.654 13:36:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.654 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.654 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.654 13:36:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:30.654 13:36:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:30.654 13:36:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:30.654 13:36:47 -- host/auth.sh@44 -- # digest=sha256 00:26:30.654 13:36:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.654 13:36:47 -- host/auth.sh@44 
-- # keyid=2 00:26:30.654 13:36:47 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:30.654 13:36:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:30.654 13:36:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:30.654 13:36:47 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:30.654 13:36:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:26:30.654 13:36:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:30.654 13:36:47 -- host/auth.sh@68 -- # digest=sha256 00:26:30.654 13:36:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:30.654 13:36:47 -- host/auth.sh@68 -- # keyid=2 00:26:30.654 13:36:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:30.654 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.654 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.654 13:36:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.654 13:36:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:30.654 13:36:47 -- nvmf/common.sh@717 -- # local ip 00:26:30.654 13:36:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:30.654 13:36:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:30.654 13:36:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.654 13:36:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.654 13:36:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:30.654 13:36:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.654 13:36:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:30.654 13:36:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:30.654 13:36:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:30.654 13:36:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:30.654 13:36:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.654 13:36:47 -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 nvme0n1 00:26:30.912 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.912 13:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.912 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.912 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 13:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:30.912 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.912 13:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.912 13:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.912 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.912 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:30.912 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.912 13:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:30.912 13:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:30.912 13:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:30.912 13:36:48 -- host/auth.sh@44 -- # digest=sha256 00:26:30.912 13:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.912 13:36:48 -- host/auth.sh@44 -- # keyid=3 00:26:30.912 13:36:48 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:30.912 13:36:48 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:30.912 13:36:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:30.912 13:36:48 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:30.912 13:36:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:26:30.912 13:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:30.912 13:36:48 -- host/auth.sh@68 -- # digest=sha256 00:26:30.913 13:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:30.913 13:36:48 -- host/auth.sh@68 -- # keyid=3 00:26:30.913 13:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:30.913 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.913 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:30.913 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.913 13:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:30.913 13:36:48 -- nvmf/common.sh@717 -- # local ip 00:26:30.913 13:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:30.913 13:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:30.913 13:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.913 13:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.913 13:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:30.913 13:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.913 13:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:30.913 13:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:30.913 13:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:30.913 13:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:30.913 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.913 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.170 nvme0n1 00:26:31.170 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.170 13:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.170 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.170 13:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:31.170 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.170 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.170 13:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.170 13:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.170 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.170 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.170 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.170 13:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:31.170 13:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:31.170 13:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:31.170 13:36:48 -- host/auth.sh@44 -- # digest=sha256 00:26:31.170 13:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.170 13:36:48 -- host/auth.sh@44 -- # keyid=4 00:26:31.170 13:36:48 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:31.170 13:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:31.170 13:36:48 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:26:31.170 13:36:48 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:31.170 13:36:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:26:31.170 13:36:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:31.171 13:36:48 -- host/auth.sh@68 -- # digest=sha256 00:26:31.171 13:36:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:31.171 13:36:48 -- host/auth.sh@68 -- # keyid=4 00:26:31.171 13:36:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:31.171 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.171 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.171 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.171 13:36:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:31.171 13:36:48 -- nvmf/common.sh@717 -- # local ip 00:26:31.171 13:36:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:31.171 13:36:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:31.171 13:36:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.171 13:36:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.171 13:36:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:31.171 13:36:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.171 13:36:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:31.171 13:36:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:31.171 13:36:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:31.171 13:36:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.171 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.171 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.428 nvme0n1 00:26:31.428 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.428 13:36:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.428 13:36:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:31.428 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.428 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.428 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.428 13:36:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.428 13:36:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.428 13:36:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:31.428 13:36:48 -- common/autotest_common.sh@10 -- # set +x 00:26:31.428 13:36:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:31.428 13:36:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.428 13:36:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:31.428 13:36:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:31.428 13:36:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:31.428 13:36:48 -- host/auth.sh@44 -- # digest=sha256 00:26:31.428 13:36:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.428 13:36:48 -- host/auth.sh@44 -- # keyid=0 00:26:31.428 13:36:48 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:31.428 13:36:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:31.428 13:36:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:33.328 13:36:50 -- 
host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:33.328 13:36:50 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:26:33.328 13:36:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:33.328 13:36:50 -- host/auth.sh@68 -- # digest=sha256 00:26:33.328 13:36:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:33.328 13:36:50 -- host/auth.sh@68 -- # keyid=0 00:26:33.328 13:36:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:33.328 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.328 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.328 13:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.328 13:36:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:33.328 13:36:50 -- nvmf/common.sh@717 -- # local ip 00:26:33.328 13:36:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:33.328 13:36:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:33.328 13:36:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.328 13:36:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.328 13:36:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:33.328 13:36:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.328 13:36:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:33.328 13:36:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:33.328 13:36:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:33.328 13:36:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:33.328 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.328 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.586 nvme0n1 00:26:33.586 13:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.586 13:36:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:33.586 13:36:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.586 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.586 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.586 13:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.586 13:36:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.586 13:36:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.586 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.586 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.586 13:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.586 13:36:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:33.586 13:36:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:33.586 13:36:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:33.586 13:36:50 -- host/auth.sh@44 -- # digest=sha256 00:26:33.586 13:36:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.586 13:36:50 -- host/auth.sh@44 -- # keyid=1 00:26:33.586 13:36:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:33.586 13:36:50 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:33.586 13:36:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:33.586 13:36:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:33.586 13:36:50 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:26:33.586 13:36:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:33.586 13:36:50 -- host/auth.sh@68 -- # digest=sha256 00:26:33.586 13:36:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:33.586 13:36:50 -- host/auth.sh@68 -- # keyid=1 00:26:33.586 13:36:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:33.586 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.586 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.586 13:36:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.586 13:36:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:33.586 13:36:50 -- nvmf/common.sh@717 -- # local ip 00:26:33.587 13:36:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:33.587 13:36:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:33.587 13:36:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.587 13:36:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.587 13:36:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:33.587 13:36:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.587 13:36:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:33.587 13:36:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:33.587 13:36:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:33.587 13:36:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:33.587 13:36:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.587 13:36:50 -- common/autotest_common.sh@10 -- # set +x 00:26:33.846 nvme0n1 00:26:33.846 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:33.846 13:36:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.846 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:33.846 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:33.846 13:36:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:33.846 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.104 13:36:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.104 13:36:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.104 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.104 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.104 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.104 13:36:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:34.104 13:36:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:34.104 13:36:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:34.104 13:36:51 -- host/auth.sh@44 -- # digest=sha256 00:26:34.104 13:36:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.104 13:36:51 -- host/auth.sh@44 -- # keyid=2 00:26:34.104 13:36:51 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:34.104 13:36:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:34.104 13:36:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:34.104 13:36:51 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:34.104 13:36:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:26:34.104 13:36:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:34.104 13:36:51 -- 
host/auth.sh@68 -- # digest=sha256 00:26:34.104 13:36:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:34.104 13:36:51 -- host/auth.sh@68 -- # keyid=2 00:26:34.104 13:36:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.104 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.104 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.104 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.104 13:36:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:34.104 13:36:51 -- nvmf/common.sh@717 -- # local ip 00:26:34.104 13:36:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:34.104 13:36:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:34.104 13:36:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.104 13:36:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.104 13:36:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:34.104 13:36:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.104 13:36:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:34.104 13:36:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:34.104 13:36:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:34.104 13:36:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:34.104 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.104 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.363 nvme0n1 00:26:34.363 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.363 13:36:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.363 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.363 13:36:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:34.363 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.363 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.363 13:36:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.363 13:36:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.363 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.363 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.363 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.363 13:36:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:34.363 13:36:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:34.363 13:36:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:34.363 13:36:51 -- host/auth.sh@44 -- # digest=sha256 00:26:34.363 13:36:51 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.363 13:36:51 -- host/auth.sh@44 -- # keyid=3 00:26:34.363 13:36:51 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:34.363 13:36:51 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:34.363 13:36:51 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:34.363 13:36:51 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:34.363 13:36:51 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:26:34.363 13:36:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:34.363 13:36:51 -- host/auth.sh@68 -- # digest=sha256 00:26:34.363 13:36:51 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:34.363 13:36:51 
-- host/auth.sh@68 -- # keyid=3 00:26:34.363 13:36:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.363 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.363 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.363 13:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.363 13:36:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:34.363 13:36:51 -- nvmf/common.sh@717 -- # local ip 00:26:34.363 13:36:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:34.363 13:36:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:34.363 13:36:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.363 13:36:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.363 13:36:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:34.363 13:36:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.363 13:36:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:34.363 13:36:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:34.363 13:36:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:34.363 13:36:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:34.363 13:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.363 13:36:51 -- common/autotest_common.sh@10 -- # set +x 00:26:34.928 nvme0n1 00:26:34.928 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.928 13:36:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.928 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.928 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.928 13:36:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:34.928 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.928 13:36:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.928 13:36:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.929 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.929 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.929 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.929 13:36:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:34.929 13:36:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:34.929 13:36:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:34.929 13:36:52 -- host/auth.sh@44 -- # digest=sha256 00:26:34.929 13:36:52 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.929 13:36:52 -- host/auth.sh@44 -- # keyid=4 00:26:34.929 13:36:52 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:34.929 13:36:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:34.929 13:36:52 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:34.929 13:36:52 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:34.929 13:36:52 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:26:34.929 13:36:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:34.929 13:36:52 -- host/auth.sh@68 -- # digest=sha256 00:26:34.929 13:36:52 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:34.929 13:36:52 -- host/auth.sh@68 -- # keyid=4 00:26:34.929 13:36:52 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:34.929 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.929 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:34.929 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.929 13:36:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:34.929 13:36:52 -- nvmf/common.sh@717 -- # local ip 00:26:34.929 13:36:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:34.929 13:36:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:34.929 13:36:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.929 13:36:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.929 13:36:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:34.929 13:36:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.929 13:36:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:34.929 13:36:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:34.929 13:36:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:34.929 13:36:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:34.929 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.929 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 nvme0n1 00:26:35.186 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.186 13:36:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.186 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.186 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 13:36:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:35.186 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.186 13:36:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.186 13:36:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.186 13:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:35.186 13:36:52 -- common/autotest_common.sh@10 -- # set +x 00:26:35.186 13:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:35.186 13:36:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.186 13:36:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:35.186 13:36:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:35.186 13:36:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:35.186 13:36:52 -- host/auth.sh@44 -- # digest=sha256 00:26:35.186 13:36:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.186 13:36:52 -- host/auth.sh@44 -- # keyid=0 00:26:35.187 13:36:52 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:35.187 13:36:52 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:35.187 13:36:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:39.378 13:36:56 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:39.378 13:36:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:26:39.378 13:36:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:39.378 13:36:56 -- host/auth.sh@68 -- # digest=sha256 00:26:39.378 13:36:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:39.378 13:36:56 -- host/auth.sh@68 -- # keyid=0 00:26:39.378 13:36:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
00:26:39.378 13:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.378 13:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:39.378 13:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.378 13:36:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.378 13:36:56 -- nvmf/common.sh@717 -- # local ip 00:26:39.378 13:36:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.378 13:36:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.378 13:36:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.378 13:36:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.378 13:36:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.378 13:36:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.378 13:36:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.378 13:36:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.378 13:36:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.378 13:36:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:39.378 13:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.378 13:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:39.644 nvme0n1 00:26:39.644 13:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.644 13:36:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.644 13:36:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:39.644 13:36:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.644 13:36:56 -- common/autotest_common.sh@10 -- # set +x 00:26:39.644 13:36:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.644 13:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.644 13:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.644 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.644 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:39.644 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.644 13:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:39.644 13:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:39.644 13:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:39.644 13:36:57 -- host/auth.sh@44 -- # digest=sha256 00:26:39.644 13:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.644 13:36:57 -- host/auth.sh@44 -- # keyid=1 00:26:39.644 13:36:57 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:39.644 13:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:39.644 13:36:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:39.644 13:36:57 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:39.644 13:36:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:26:39.644 13:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:39.644 13:36:57 -- host/auth.sh@68 -- # digest=sha256 00:26:39.644 13:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:39.645 13:36:57 -- host/auth.sh@68 -- # keyid=1 00:26:39.645 13:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.645 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.645 13:36:57 -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.645 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:39.645 13:36:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:39.645 13:36:57 -- nvmf/common.sh@717 -- # local ip 00:26:39.645 13:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:39.645 13:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:39.645 13:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.645 13:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.645 13:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:39.645 13:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.645 13:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:39.645 13:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:39.645 13:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:39.645 13:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:39.645 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:39.645 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 nvme0n1 00:26:40.234 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.234 13:36:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.234 13:36:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:40.234 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.234 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.493 13:36:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.493 13:36:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.493 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.493 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:40.493 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.493 13:36:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:40.493 13:36:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:40.493 13:36:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:40.493 13:36:57 -- host/auth.sh@44 -- # digest=sha256 00:26:40.493 13:36:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.493 13:36:57 -- host/auth.sh@44 -- # keyid=2 00:26:40.493 13:36:57 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:40.493 13:36:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:40.493 13:36:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:40.493 13:36:57 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:40.493 13:36:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:26:40.493 13:36:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:40.493 13:36:57 -- host/auth.sh@68 -- # digest=sha256 00:26:40.493 13:36:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:40.493 13:36:57 -- host/auth.sh@68 -- # keyid=2 00:26:40.493 13:36:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.493 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.493 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:40.493 13:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:40.493 13:36:57 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:26:40.493 13:36:57 -- nvmf/common.sh@717 -- # local ip 00:26:40.493 13:36:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:40.493 13:36:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:40.493 13:36:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.493 13:36:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.493 13:36:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:40.493 13:36:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.493 13:36:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:40.493 13:36:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:40.493 13:36:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:40.493 13:36:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:40.493 13:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:40.493 13:36:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.059 nvme0n1 00:26:41.059 13:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.059 13:36:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.059 13:36:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:41.059 13:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.059 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:41.059 13:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.059 13:36:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.059 13:36:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.059 13:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.059 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:41.059 13:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.059 13:36:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:41.059 13:36:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:41.059 13:36:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:41.059 13:36:58 -- host/auth.sh@44 -- # digest=sha256 00:26:41.059 13:36:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.059 13:36:58 -- host/auth.sh@44 -- # keyid=3 00:26:41.060 13:36:58 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:41.060 13:36:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:41.060 13:36:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:41.060 13:36:58 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:41.060 13:36:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:26:41.060 13:36:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:41.060 13:36:58 -- host/auth.sh@68 -- # digest=sha256 00:26:41.060 13:36:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:41.060 13:36:58 -- host/auth.sh@68 -- # keyid=3 00:26:41.060 13:36:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.060 13:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.060 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:41.060 13:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.060 13:36:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:41.060 13:36:58 -- nvmf/common.sh@717 -- # local ip 00:26:41.060 13:36:58 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:26:41.060 13:36:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:41.060 13:36:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.060 13:36:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.060 13:36:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:41.060 13:36:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.060 13:36:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:41.060 13:36:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:41.060 13:36:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:41.060 13:36:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:41.060 13:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.060 13:36:58 -- common/autotest_common.sh@10 -- # set +x 00:26:41.626 nvme0n1 00:26:41.626 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.626 13:36:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:41.626 13:36:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.626 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.626 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:41.626 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.885 13:36:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.885 13:36:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.885 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.885 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:41.885 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.885 13:36:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:41.885 13:36:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:41.885 13:36:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:41.885 13:36:59 -- host/auth.sh@44 -- # digest=sha256 00:26:41.885 13:36:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.885 13:36:59 -- host/auth.sh@44 -- # keyid=4 00:26:41.885 13:36:59 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:41.885 13:36:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:26:41.885 13:36:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:41.885 13:36:59 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:41.885 13:36:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:26:41.885 13:36:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:41.885 13:36:59 -- host/auth.sh@68 -- # digest=sha256 00:26:41.885 13:36:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:41.885 13:36:59 -- host/auth.sh@68 -- # keyid=4 00:26:41.885 13:36:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.885 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.885 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:41.885 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.885 13:36:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:41.885 13:36:59 -- nvmf/common.sh@717 -- # local ip 00:26:41.885 13:36:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:41.885 13:36:59 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:26:41.885 13:36:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.885 13:36:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.885 13:36:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:41.885 13:36:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.885 13:36:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:41.885 13:36:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:41.885 13:36:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:41.885 13:36:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.885 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.885 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.452 nvme0n1 00:26:42.452 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.452 13:36:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.452 13:36:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.452 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.452 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.452 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.452 13:36:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.452 13:36:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.452 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.452 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.452 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.452 13:36:59 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:42.452 13:36:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.452 13:36:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.452 13:36:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:42.452 13:36:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.452 13:36:59 -- host/auth.sh@44 -- # digest=sha384 00:26:42.452 13:36:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.452 13:36:59 -- host/auth.sh@44 -- # keyid=0 00:26:42.452 13:36:59 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:42.452 13:36:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.452 13:36:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:42.452 13:36:59 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:42.452 13:36:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:26:42.452 13:36:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.452 13:36:59 -- host/auth.sh@68 -- # digest=sha384 00:26:42.452 13:36:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:42.452 13:36:59 -- host/auth.sh@68 -- # keyid=0 00:26:42.452 13:36:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.452 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.452 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.452 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.452 13:36:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.452 13:36:59 -- nvmf/common.sh@717 -- # local ip 00:26:42.452 13:36:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.452 13:36:59 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:26:42.452 13:36:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.452 13:36:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.452 13:36:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.452 13:36:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.452 13:36:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.452 13:36:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.452 13:36:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.452 13:36:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:42.452 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.452 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.711 nvme0n1 00:26:42.711 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.711 13:36:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.711 13:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.711 13:36:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.711 13:36:59 -- common/autotest_common.sh@10 -- # set +x 00:26:42.711 13:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.711 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.711 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.711 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.711 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.711 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.711 13:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.711 13:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:42.711 13:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.711 13:37:00 -- host/auth.sh@44 -- # digest=sha384 00:26:42.711 13:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.711 13:37:00 -- host/auth.sh@44 -- # keyid=1 00:26:42.711 13:37:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:42.711 13:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.711 13:37:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:42.711 13:37:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:42.711 13:37:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:26:42.711 13:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.711 13:37:00 -- host/auth.sh@68 -- # digest=sha384 00:26:42.711 13:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:42.711 13:37:00 -- host/auth.sh@68 -- # keyid=1 00:26:42.711 13:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.711 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.711 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.711 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.711 13:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.711 13:37:00 -- nvmf/common.sh@717 -- # local ip 00:26:42.711 13:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.711 13:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:42.711 13:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.711 
13:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.711 13:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.711 13:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.711 13:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.711 13:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.711 13:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.711 13:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:42.711 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.711 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.711 nvme0n1 00:26:42.711 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.711 13:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.711 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.711 13:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.711 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.970 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.970 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.970 13:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:42.970 13:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.970 13:37:00 -- host/auth.sh@44 -- # digest=sha384 00:26:42.970 13:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.970 13:37:00 -- host/auth.sh@44 -- # keyid=2 00:26:42.970 13:37:00 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:42.970 13:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.970 13:37:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:42.970 13:37:00 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:42.970 13:37:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:26:42.970 13:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.970 13:37:00 -- host/auth.sh@68 -- # digest=sha384 00:26:42.970 13:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:42.970 13:37:00 -- host/auth.sh@68 -- # keyid=2 00:26:42.970 13:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:42.970 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.970 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:42.970 13:37:00 -- nvmf/common.sh@717 -- # local ip 00:26:42.970 13:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:42.970 13:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:42.970 13:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.970 13:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.970 13:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:42.970 13:37:00 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.970 13:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:42.970 13:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:42.970 13:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:42.970 13:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:42.970 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.970 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 nvme0n1 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.970 13:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:42.970 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.970 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.970 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.970 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.970 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:42.970 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.971 13:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:42.971 13:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:42.971 13:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:42.971 13:37:00 -- host/auth.sh@44 -- # digest=sha384 00:26:42.971 13:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:42.971 13:37:00 -- host/auth.sh@44 -- # keyid=3 00:26:42.971 13:37:00 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:42.971 13:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:42.971 13:37:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:42.971 13:37:00 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:42.971 13:37:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:26:42.971 13:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:42.971 13:37:00 -- host/auth.sh@68 -- # digest=sha384 00:26:42.971 13:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:42.971 13:37:00 -- host/auth.sh@68 -- # keyid=3 00:26:42.971 13:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.231 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.231 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.231 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.231 13:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.231 13:37:00 -- nvmf/common.sh@717 -- # local ip 00:26:43.231 13:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.231 13:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.231 13:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.231 13:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.231 13:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.231 13:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.231 13:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:26:43.231 13:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.231 13:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.231 13:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:43.231 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.231 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.231 nvme0n1 00:26:43.231 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.231 13:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.231 13:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.231 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.231 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.231 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.231 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.231 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.231 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.231 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.231 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.231 13:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:43.231 13:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:43.231 13:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:43.231 13:37:00 -- host/auth.sh@44 -- # digest=sha384 00:26:43.231 13:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.231 13:37:00 -- host/auth.sh@44 -- # keyid=4 00:26:43.231 13:37:00 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:43.231 13:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:43.231 13:37:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:43.231 13:37:00 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:43.231 13:37:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:26:43.231 13:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:43.231 13:37:00 -- host/auth.sh@68 -- # digest=sha384 00:26:43.231 13:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:43.231 13:37:00 -- host/auth.sh@68 -- # keyid=4 00:26:43.232 13:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.232 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.232 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.232 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.232 13:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.232 13:37:00 -- nvmf/common.sh@717 -- # local ip 00:26:43.232 13:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.232 13:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.232 13:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.232 13:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.232 13:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.232 13:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.232 13:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:43.232 13:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.232 
13:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.232 13:37:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.232 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.232 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.491 nvme0n1 00:26:43.491 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.491 13:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.491 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.491 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.491 13:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.491 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.491 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.491 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.491 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.491 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.491 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.491 13:37:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.491 13:37:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:43.491 13:37:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:43.491 13:37:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:43.491 13:37:00 -- host/auth.sh@44 -- # digest=sha384 00:26:43.491 13:37:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.491 13:37:00 -- host/auth.sh@44 -- # keyid=0 00:26:43.491 13:37:00 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:43.491 13:37:00 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:43.491 13:37:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:43.491 13:37:00 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:43.491 13:37:00 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:26:43.491 13:37:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:43.491 13:37:00 -- host/auth.sh@68 -- # digest=sha384 00:26:43.491 13:37:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:43.491 13:37:00 -- host/auth.sh@68 -- # keyid=0 00:26:43.491 13:37:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.491 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.491 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.491 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.491 13:37:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.491 13:37:00 -- nvmf/common.sh@717 -- # local ip 00:26:43.491 13:37:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.491 13:37:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.491 13:37:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.491 13:37:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.491 13:37:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.491 13:37:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.491 13:37:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:43.491 13:37:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.491 13:37:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.491 13:37:00 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:43.491 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.491 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 nvme0n1 00:26:43.751 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.751 13:37:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.751 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 13:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.751 13:37:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.751 13:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:00 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:43.751 13:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:43.751 13:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:43.751 13:37:01 -- host/auth.sh@44 -- # digest=sha384 00:26:43.751 13:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:43.751 13:37:01 -- host/auth.sh@44 -- # keyid=1 00:26:43.751 13:37:01 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:43.751 13:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:43.751 13:37:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:43.751 13:37:01 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:43.751 13:37:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:26:43.751 13:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:43.751 13:37:01 -- host/auth.sh@68 -- # digest=sha384 00:26:43.751 13:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:43.751 13:37:01 -- host/auth.sh@68 -- # keyid=1 00:26:43.751 13:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:43.751 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:43.751 13:37:01 -- nvmf/common.sh@717 -- # local ip 00:26:43.751 13:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:43.751 13:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:43.751 13:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.751 13:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.751 13:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:43.751 13:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.751 13:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:43.751 13:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:43.751 13:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:43.751 13:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:43.751 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 nvme0n1 00:26:43.751 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.751 13:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:43.751 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:43.751 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.751 13:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.751 13:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.751 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.751 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.010 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.010 13:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:44.010 13:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:44.010 13:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.010 13:37:01 -- host/auth.sh@44 -- # digest=sha384 00:26:44.010 13:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.010 13:37:01 -- host/auth.sh@44 -- # keyid=2 00:26:44.010 13:37:01 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:44.010 13:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.010 13:37:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:44.010 13:37:01 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:44.010 13:37:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:26:44.010 13:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.010 13:37:01 -- host/auth.sh@68 -- # digest=sha384 00:26:44.010 13:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:44.010 13:37:01 -- host/auth.sh@68 -- # keyid=2 00:26:44.011 13:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.011 13:37:01 -- nvmf/common.sh@717 -- # local ip 00:26:44.011 13:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.011 13:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.011 13:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.011 13:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 
13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 nvme0n1 00:26:44.011 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.011 13:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:44.011 13:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:44.011 13:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.011 13:37:01 -- host/auth.sh@44 -- # digest=sha384 00:26:44.011 13:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.011 13:37:01 -- host/auth.sh@44 -- # keyid=3 00:26:44.011 13:37:01 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:44.011 13:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.011 13:37:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:44.011 13:37:01 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:44.011 13:37:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:26:44.011 13:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.011 13:37:01 -- host/auth.sh@68 -- # digest=sha384 00:26:44.011 13:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:44.011 13:37:01 -- host/auth.sh@68 -- # keyid=3 00:26:44.011 13:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.011 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.011 13:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.011 13:37:01 -- nvmf/common.sh@717 -- # local ip 00:26:44.011 13:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.011 13:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.011 13:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.011 13:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.011 13:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.011 13:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:44.011 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.011 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 nvme0n1 00:26:44.269 13:37:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.269 13:37:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.269 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.269 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 13:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:44.269 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.269 13:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.269 13:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.269 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.269 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.269 13:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:44.269 13:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:44.269 13:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.269 13:37:01 -- host/auth.sh@44 -- # digest=sha384 00:26:44.269 13:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.269 13:37:01 -- host/auth.sh@44 -- # keyid=4 00:26:44.269 13:37:01 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:44.269 13:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.269 13:37:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:44.269 13:37:01 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:44.269 13:37:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:26:44.269 13:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.269 13:37:01 -- host/auth.sh@68 -- # digest=sha384 00:26:44.269 13:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:44.269 13:37:01 -- host/auth.sh@68 -- # keyid=4 00:26:44.269 13:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.269 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.269 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.269 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.269 13:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.269 13:37:01 -- nvmf/common.sh@717 -- # local ip 00:26:44.269 13:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.269 13:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.269 13:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.269 13:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.269 13:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.269 13:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.269 13:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.269 13:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.269 13:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.269 13:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.269 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.269 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 nvme0n1 00:26:44.528 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.528 13:37:01 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.528 13:37:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:44.528 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.528 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.528 13:37:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.528 13:37:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.528 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.528 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.528 13:37:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.528 13:37:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:44.528 13:37:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:44.528 13:37:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.528 13:37:01 -- host/auth.sh@44 -- # digest=sha384 00:26:44.528 13:37:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.528 13:37:01 -- host/auth.sh@44 -- # keyid=0 00:26:44.528 13:37:01 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:44.528 13:37:01 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.528 13:37:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:44.528 13:37:01 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:44.528 13:37:01 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:26:44.528 13:37:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.528 13:37:01 -- host/auth.sh@68 -- # digest=sha384 00:26:44.528 13:37:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:44.528 13:37:01 -- host/auth.sh@68 -- # keyid=0 00:26:44.528 13:37:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.528 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.528 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.528 13:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.528 13:37:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.528 13:37:01 -- nvmf/common.sh@717 -- # local ip 00:26:44.528 13:37:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.528 13:37:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.528 13:37:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.528 13:37:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.528 13:37:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.528 13:37:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.528 13:37:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.528 13:37:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.528 13:37:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.528 13:37:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:44.528 13:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.528 13:37:01 -- common/autotest_common.sh@10 -- # set +x 00:26:44.786 nvme0n1 00:26:44.786 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.786 13:37:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.786 13:37:02 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.786 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:44.786 13:37:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:44.786 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.786 13:37:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.786 13:37:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.786 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.786 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:44.786 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.786 13:37:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:44.786 13:37:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:44.786 13:37:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:44.786 13:37:02 -- host/auth.sh@44 -- # digest=sha384 00:26:44.786 13:37:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:44.786 13:37:02 -- host/auth.sh@44 -- # keyid=1 00:26:44.786 13:37:02 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:44.786 13:37:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:44.786 13:37:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:44.786 13:37:02 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:44.786 13:37:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:26:44.786 13:37:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:44.786 13:37:02 -- host/auth.sh@68 -- # digest=sha384 00:26:44.786 13:37:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:44.786 13:37:02 -- host/auth.sh@68 -- # keyid=1 00:26:44.786 13:37:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:44.786 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.786 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:44.786 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.786 13:37:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:44.786 13:37:02 -- nvmf/common.sh@717 -- # local ip 00:26:44.786 13:37:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:44.786 13:37:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:44.786 13:37:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.786 13:37:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.786 13:37:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:44.786 13:37:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.786 13:37:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:44.786 13:37:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:44.786 13:37:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:44.787 13:37:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:44.787 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.787 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.045 nvme0n1 00:26:45.045 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.045 13:37:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.045 13:37:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.045 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:26:45.045 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.045 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.045 13:37:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.045 13:37:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.045 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.045 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.045 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.045 13:37:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:45.045 13:37:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:45.045 13:37:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:45.045 13:37:02 -- host/auth.sh@44 -- # digest=sha384 00:26:45.045 13:37:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.045 13:37:02 -- host/auth.sh@44 -- # keyid=2 00:26:45.045 13:37:02 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:45.045 13:37:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:45.045 13:37:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:45.045 13:37:02 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:45.046 13:37:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:26:45.046 13:37:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:45.046 13:37:02 -- host/auth.sh@68 -- # digest=sha384 00:26:45.046 13:37:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:45.046 13:37:02 -- host/auth.sh@68 -- # keyid=2 00:26:45.046 13:37:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.046 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.046 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.046 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.046 13:37:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:45.046 13:37:02 -- nvmf/common.sh@717 -- # local ip 00:26:45.046 13:37:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:45.046 13:37:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:45.046 13:37:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.046 13:37:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.046 13:37:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:45.046 13:37:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.046 13:37:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:45.046 13:37:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:45.046 13:37:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:45.046 13:37:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:45.046 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.046 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.304 nvme0n1 00:26:45.304 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.304 13:37:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.304 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.304 13:37:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.304 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.304 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.304 13:37:02 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.304 13:37:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.304 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.304 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.304 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.304 13:37:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:45.304 13:37:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:45.304 13:37:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:45.304 13:37:02 -- host/auth.sh@44 -- # digest=sha384 00:26:45.304 13:37:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.304 13:37:02 -- host/auth.sh@44 -- # keyid=3 00:26:45.304 13:37:02 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:45.304 13:37:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:45.304 13:37:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:45.304 13:37:02 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:45.304 13:37:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:26:45.304 13:37:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:45.304 13:37:02 -- host/auth.sh@68 -- # digest=sha384 00:26:45.304 13:37:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:45.304 13:37:02 -- host/auth.sh@68 -- # keyid=3 00:26:45.304 13:37:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.304 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.305 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.305 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.305 13:37:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:45.305 13:37:02 -- nvmf/common.sh@717 -- # local ip 00:26:45.305 13:37:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:45.305 13:37:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:45.305 13:37:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.305 13:37:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.305 13:37:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:45.305 13:37:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.305 13:37:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:45.305 13:37:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:45.305 13:37:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:45.305 13:37:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:45.305 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.305 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.563 nvme0n1 00:26:45.563 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.563 13:37:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.563 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.563 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.563 13:37:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.563 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.563 13:37:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.563 13:37:02 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:45.563 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.563 13:37:02 -- common/autotest_common.sh@10 -- # set +x 00:26:45.563 13:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.563 13:37:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:45.563 13:37:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:45.563 13:37:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:45.563 13:37:02 -- host/auth.sh@44 -- # digest=sha384 00:26:45.563 13:37:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.563 13:37:02 -- host/auth.sh@44 -- # keyid=4 00:26:45.563 13:37:02 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:45.563 13:37:02 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:45.563 13:37:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:45.563 13:37:02 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:45.563 13:37:02 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:26:45.563 13:37:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:45.563 13:37:02 -- host/auth.sh@68 -- # digest=sha384 00:26:45.563 13:37:02 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:45.563 13:37:02 -- host/auth.sh@68 -- # keyid=4 00:26:45.563 13:37:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.563 13:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.563 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.563 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.563 13:37:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:45.563 13:37:03 -- nvmf/common.sh@717 -- # local ip 00:26:45.563 13:37:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:45.563 13:37:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:45.563 13:37:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.822 13:37:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.822 13:37:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:45.822 13:37:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.822 13:37:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:45.822 13:37:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:45.822 13:37:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:45.822 13:37:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.822 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.822 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.822 nvme0n1 00:26:45.822 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.822 13:37:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.822 13:37:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:45.822 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.822 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:45.822 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.822 13:37:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.822 13:37:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.822 13:37:03 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.822 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.081 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.081 13:37:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.081 13:37:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:46.081 13:37:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:46.081 13:37:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:46.081 13:37:03 -- host/auth.sh@44 -- # digest=sha384 00:26:46.081 13:37:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.081 13:37:03 -- host/auth.sh@44 -- # keyid=0 00:26:46.081 13:37:03 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:46.081 13:37:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:46.081 13:37:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:46.081 13:37:03 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:46.081 13:37:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:26:46.081 13:37:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:46.081 13:37:03 -- host/auth.sh@68 -- # digest=sha384 00:26:46.081 13:37:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:46.081 13:37:03 -- host/auth.sh@68 -- # keyid=0 00:26:46.081 13:37:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.081 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.081 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.081 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.081 13:37:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:46.081 13:37:03 -- nvmf/common.sh@717 -- # local ip 00:26:46.081 13:37:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:46.081 13:37:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:46.081 13:37:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.081 13:37:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.081 13:37:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:46.081 13:37:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.081 13:37:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:46.081 13:37:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:46.081 13:37:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:46.081 13:37:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:46.081 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.081 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.339 nvme0n1 00:26:46.339 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.339 13:37:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.339 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.339 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.339 13:37:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:46.339 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.339 13:37:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.339 13:37:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.339 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.339 13:37:03 -- 
common/autotest_common.sh@10 -- # set +x 00:26:46.339 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.339 13:37:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:46.339 13:37:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:46.339 13:37:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:46.339 13:37:03 -- host/auth.sh@44 -- # digest=sha384 00:26:46.339 13:37:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.339 13:37:03 -- host/auth.sh@44 -- # keyid=1 00:26:46.339 13:37:03 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:46.339 13:37:03 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:46.339 13:37:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:46.339 13:37:03 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:46.339 13:37:03 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:26:46.339 13:37:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:46.339 13:37:03 -- host/auth.sh@68 -- # digest=sha384 00:26:46.339 13:37:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:46.339 13:37:03 -- host/auth.sh@68 -- # keyid=1 00:26:46.339 13:37:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.339 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.339 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.339 13:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.339 13:37:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:46.339 13:37:03 -- nvmf/common.sh@717 -- # local ip 00:26:46.339 13:37:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:46.339 13:37:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:46.339 13:37:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.339 13:37:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.339 13:37:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:46.339 13:37:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.339 13:37:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:46.339 13:37:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:46.339 13:37:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:46.339 13:37:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:46.339 13:37:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.339 13:37:03 -- common/autotest_common.sh@10 -- # set +x 00:26:46.909 nvme0n1 00:26:46.909 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.909 13:37:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.909 13:37:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:46.909 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.909 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.909 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.909 13:37:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.909 13:37:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.909 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.909 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.909 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
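The round that finishes above (sha384 / ffdhe6144 / key1) follows the same host-side pattern as every other combination in this log: constrain the initiator's DH-HMAC-CHAP parameters, attach, check that the controller shows up, detach. A minimal standalone sketch of that round, using only the RPC flags visible in the trace; the rpc.py path and the $rpc variable are assumptions (the test itself goes through its rpc_cmd wrapper).

```bash
# One host-side authentication round, re-expressed as direct rpc.py calls.
# ASSUMPTION: the rpc.py location; the flags mirror the rpc_cmd calls traced above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Restrict the initiator to a single DH-HMAC-CHAP digest and DH group.
"$rpc" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Attach with the secret registered for this key id; authentication happens here.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1

# Verify the controller authenticated and is visible, then tear it down.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
"$rpc" bdev_nvme_detach_controller nvme0
```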
00:26:46.909 13:37:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:46.909 13:37:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:46.909 13:37:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:46.909 13:37:04 -- host/auth.sh@44 -- # digest=sha384 00:26:46.909 13:37:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.909 13:37:04 -- host/auth.sh@44 -- # keyid=2 00:26:46.909 13:37:04 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:46.909 13:37:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:46.909 13:37:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:46.909 13:37:04 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:46.909 13:37:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:26:46.909 13:37:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:46.909 13:37:04 -- host/auth.sh@68 -- # digest=sha384 00:26:46.909 13:37:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:46.909 13:37:04 -- host/auth.sh@68 -- # keyid=2 00:26:46.909 13:37:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:46.909 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.909 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:46.909 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:46.909 13:37:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:46.909 13:37:04 -- nvmf/common.sh@717 -- # local ip 00:26:46.909 13:37:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:46.909 13:37:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:46.909 13:37:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.909 13:37:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.909 13:37:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:46.909 13:37:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.909 13:37:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:46.909 13:37:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:46.909 13:37:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:46.909 13:37:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:46.909 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:46.909 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.168 nvme0n1 00:26:47.168 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.168 13:37:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:47.168 13:37:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.168 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.168 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.168 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.168 13:37:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.168 13:37:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.168 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.168 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.168 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.168 13:37:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:47.168 13:37:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
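Each such round is preceded by nvmet_auth_set_key (as in the call traced just above), which stages the same DHHC-1 secret on the kernel nvmet target so the subsequent attach can authenticate. The xtrace output shows the echoed values but not where they are redirected, so the following is only a guessed reconstruction assuming the standard Linux nvmet configfs attributes; the host directory path and attribute names are assumptions, while the values come straight from the log.

```bash
# Hedged sketch of nvmet_auth_set_key <digest> <dhgroup> <keyid> on the target side.
# ASSUMPTION: the configfs path and attribute names; the trace only shows the echoed values.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[$keyid]}    # the DHHC-1:xx:... secret for this key id
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # seen in the trace as: echo 'hmac(sha384)'
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"  # seen as: echo ffdhe6144
    echo "$key"          > "$host_dir/dhchap_key"      # seen as: echo DHHC-1:...
}
```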
00:26:47.168 13:37:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:47.168 13:37:04 -- host/auth.sh@44 -- # digest=sha384 00:26:47.168 13:37:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.168 13:37:04 -- host/auth.sh@44 -- # keyid=3 00:26:47.168 13:37:04 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:47.168 13:37:04 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:47.168 13:37:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:47.168 13:37:04 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:47.168 13:37:04 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:26:47.168 13:37:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:47.168 13:37:04 -- host/auth.sh@68 -- # digest=sha384 00:26:47.168 13:37:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:47.168 13:37:04 -- host/auth.sh@68 -- # keyid=3 00:26:47.168 13:37:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.168 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.168 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.168 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.168 13:37:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:47.168 13:37:04 -- nvmf/common.sh@717 -- # local ip 00:26:47.168 13:37:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:47.168 13:37:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:47.168 13:37:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.168 13:37:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.168 13:37:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:47.168 13:37:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.168 13:37:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:47.168 13:37:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:47.168 13:37:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:47.168 13:37:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:47.168 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.168 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 nvme0n1 00:26:47.736 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.736 13:37:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.736 13:37:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.736 13:37:04 -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 13:37:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:47.736 13:37:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.736 13:37:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.736 13:37:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.736 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.736 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.736 13:37:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:47.736 13:37:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:47.736 13:37:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:47.736 13:37:05 -- host/auth.sh@44 -- 
# digest=sha384 00:26:47.736 13:37:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.736 13:37:05 -- host/auth.sh@44 -- # keyid=4 00:26:47.736 13:37:05 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:47.736 13:37:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:47.736 13:37:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:47.736 13:37:05 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:47.736 13:37:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:26:47.736 13:37:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:47.736 13:37:05 -- host/auth.sh@68 -- # digest=sha384 00:26:47.736 13:37:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:47.736 13:37:05 -- host/auth.sh@68 -- # keyid=4 00:26:47.736 13:37:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.736 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.736 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:47.736 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.736 13:37:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:47.736 13:37:05 -- nvmf/common.sh@717 -- # local ip 00:26:47.736 13:37:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:47.736 13:37:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:47.736 13:37:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.736 13:37:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.736 13:37:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:47.736 13:37:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.736 13:37:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:47.736 13:37:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:47.736 13:37:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:47.736 13:37:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.736 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.736 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 nvme0n1 00:26:47.994 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:47.994 13:37:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:47.994 13:37:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.994 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:47.994 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:47.994 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.253 13:37:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.253 13:37:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.253 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.253 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.253 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.253 13:37:05 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.253 13:37:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:48.253 13:37:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:48.253 13:37:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:48.253 13:37:05 -- host/auth.sh@44 -- # 
digest=sha384 00:26:48.253 13:37:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.253 13:37:05 -- host/auth.sh@44 -- # keyid=0 00:26:48.253 13:37:05 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:48.253 13:37:05 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:48.253 13:37:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:48.253 13:37:05 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:48.253 13:37:05 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:26:48.253 13:37:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:48.253 13:37:05 -- host/auth.sh@68 -- # digest=sha384 00:26:48.253 13:37:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:48.253 13:37:05 -- host/auth.sh@68 -- # keyid=0 00:26:48.253 13:37:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.253 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.253 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.253 13:37:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.253 13:37:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:48.253 13:37:05 -- nvmf/common.sh@717 -- # local ip 00:26:48.253 13:37:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:48.253 13:37:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:48.253 13:37:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.253 13:37:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.253 13:37:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:48.253 13:37:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.253 13:37:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:48.253 13:37:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:48.253 13:37:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:48.253 13:37:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:48.253 13:37:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.253 13:37:05 -- common/autotest_common.sh@10 -- # set +x 00:26:48.820 nvme0n1 00:26:48.821 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.821 13:37:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.821 13:37:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:48.821 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.821 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:48.821 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.821 13:37:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.821 13:37:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.821 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.821 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:48.821 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.821 13:37:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:48.821 13:37:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:48.821 13:37:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:48.821 13:37:06 -- host/auth.sh@44 -- # digest=sha384 00:26:48.821 13:37:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.821 13:37:06 -- host/auth.sh@44 -- # keyid=1 00:26:48.821 13:37:06 -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:48.821 13:37:06 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:48.821 13:37:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:48.821 13:37:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:48.821 13:37:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:26:48.821 13:37:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:48.821 13:37:06 -- host/auth.sh@68 -- # digest=sha384 00:26:48.821 13:37:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:48.821 13:37:06 -- host/auth.sh@68 -- # keyid=1 00:26:48.821 13:37:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.821 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.821 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:48.821 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.821 13:37:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:48.821 13:37:06 -- nvmf/common.sh@717 -- # local ip 00:26:48.821 13:37:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:48.821 13:37:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:48.821 13:37:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.821 13:37:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.821 13:37:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:48.821 13:37:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.821 13:37:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:48.821 13:37:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:48.821 13:37:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:48.821 13:37:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:48.821 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.821 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:49.388 nvme0n1 00:26:49.388 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.388 13:37:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.388 13:37:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:49.388 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.388 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:49.388 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.647 13:37:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.647 13:37:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.647 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.647 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:49.647 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.647 13:37:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:49.647 13:37:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:49.647 13:37:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:49.647 13:37:06 -- host/auth.sh@44 -- # digest=sha384 00:26:49.647 13:37:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.647 13:37:06 -- host/auth.sh@44 -- # keyid=2 00:26:49.647 13:37:06 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:49.647 13:37:06 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:49.647 13:37:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:49.647 13:37:06 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:49.647 13:37:06 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:26:49.647 13:37:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:49.647 13:37:06 -- host/auth.sh@68 -- # digest=sha384 00:26:49.647 13:37:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:49.647 13:37:06 -- host/auth.sh@68 -- # keyid=2 00:26:49.647 13:37:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:49.647 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.647 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:49.647 13:37:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.647 13:37:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:49.647 13:37:06 -- nvmf/common.sh@717 -- # local ip 00:26:49.647 13:37:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:49.647 13:37:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:49.647 13:37:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.647 13:37:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.647 13:37:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:49.647 13:37:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.647 13:37:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:49.647 13:37:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:49.647 13:37:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:49.647 13:37:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:49.647 13:37:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.647 13:37:06 -- common/autotest_common.sh@10 -- # set +x 00:26:50.214 nvme0n1 00:26:50.214 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.214 13:37:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.214 13:37:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:50.214 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.214 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.214 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.214 13:37:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.214 13:37:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.214 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.214 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.214 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.214 13:37:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:50.214 13:37:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:50.214 13:37:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:50.214 13:37:07 -- host/auth.sh@44 -- # digest=sha384 00:26:50.214 13:37:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.214 13:37:07 -- host/auth.sh@44 -- # keyid=3 00:26:50.214 13:37:07 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:50.214 13:37:07 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:50.214 13:37:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:50.214 13:37:07 -- host/auth.sh@49 
-- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:50.214 13:37:07 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:26:50.214 13:37:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:50.214 13:37:07 -- host/auth.sh@68 -- # digest=sha384 00:26:50.214 13:37:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:50.214 13:37:07 -- host/auth.sh@68 -- # keyid=3 00:26:50.214 13:37:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.214 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.214 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.214 13:37:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.214 13:37:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:50.214 13:37:07 -- nvmf/common.sh@717 -- # local ip 00:26:50.214 13:37:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:50.214 13:37:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:50.214 13:37:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.214 13:37:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.214 13:37:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:50.214 13:37:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.214 13:37:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:50.214 13:37:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:50.214 13:37:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:50.214 13:37:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:50.214 13:37:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.214 13:37:07 -- common/autotest_common.sh@10 -- # set +x 00:26:50.781 nvme0n1 00:26:50.781 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.781 13:37:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.781 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.781 13:37:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:50.781 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:50.781 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.039 13:37:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.039 13:37:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.039 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.039 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.039 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.039 13:37:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.039 13:37:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:51.039 13:37:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.039 13:37:08 -- host/auth.sh@44 -- # digest=sha384 00:26:51.039 13:37:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.039 13:37:08 -- host/auth.sh@44 -- # keyid=4 00:26:51.039 13:37:08 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:51.039 13:37:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:26:51.039 13:37:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:51.039 13:37:08 -- host/auth.sh@49 -- # echo 
DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:51.040 13:37:08 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:26:51.040 13:37:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.040 13:37:08 -- host/auth.sh@68 -- # digest=sha384 00:26:51.040 13:37:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:51.040 13:37:08 -- host/auth.sh@68 -- # keyid=4 00:26:51.040 13:37:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.040 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.040 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.040 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.040 13:37:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.040 13:37:08 -- nvmf/common.sh@717 -- # local ip 00:26:51.040 13:37:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.040 13:37:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.040 13:37:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.040 13:37:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.040 13:37:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.040 13:37:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.040 13:37:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.040 13:37:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.040 13:37:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.040 13:37:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.040 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.040 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.605 nvme0n1 00:26:51.605 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.605 13:37:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.605 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.605 13:37:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.605 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.605 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.605 13:37:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.605 13:37:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.605 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.605 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.605 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.605 13:37:08 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:26:51.605 13:37:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:51.605 13:37:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.605 13:37:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:51.605 13:37:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.605 13:37:08 -- host/auth.sh@44 -- # digest=sha512 00:26:51.605 13:37:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.605 13:37:08 -- host/auth.sh@44 -- # keyid=0 00:26:51.605 13:37:08 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:51.605 13:37:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.605 13:37:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.605 
13:37:08 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:51.605 13:37:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:26:51.605 13:37:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.605 13:37:08 -- host/auth.sh@68 -- # digest=sha512 00:26:51.605 13:37:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.605 13:37:08 -- host/auth.sh@68 -- # keyid=0 00:26:51.605 13:37:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.605 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.605 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.605 13:37:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.605 13:37:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.605 13:37:08 -- nvmf/common.sh@717 -- # local ip 00:26:51.605 13:37:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.605 13:37:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.605 13:37:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.605 13:37:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.605 13:37:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.605 13:37:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.606 13:37:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.606 13:37:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.606 13:37:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.606 13:37:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:51.606 13:37:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.606 13:37:08 -- common/autotest_common.sh@10 -- # set +x 00:26:51.606 nvme0n1 00:26:51.606 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.606 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.606 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.606 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.606 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.863 13:37:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:51.863 13:37:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # digest=sha512 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # keyid=1 00:26:51.863 13:37:09 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:51.863 13:37:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.863 13:37:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:51.863 13:37:09 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:26:51.863 13:37:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.863 13:37:09 -- host/auth.sh@68 -- # digest=sha512 00:26:51.863 13:37:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@68 -- # keyid=1 00:26:51.863 13:37:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:51.863 13:37:09 -- nvmf/common.sh@717 -- # local ip 00:26:51.863 13:37:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:51.863 13:37:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:51.863 13:37:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.863 13:37:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.863 13:37:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:51.863 13:37:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.863 13:37:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:51.863 13:37:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:51.863 13:37:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:51.863 13:37:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 nvme0n1 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:51.863 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.863 13:37:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:51.863 13:37:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:51.863 13:37:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # digest=sha512 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@44 -- # keyid=2 00:26:51.863 13:37:09 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:51.863 13:37:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:51.863 13:37:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:51.863 13:37:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:26:51.863 13:37:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:51.863 13:37:09 -- 
host/auth.sh@68 -- # digest=sha512 00:26:51.863 13:37:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:51.863 13:37:09 -- host/auth.sh@68 -- # keyid=2 00:26:51.863 13:37:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:51.863 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.863 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:51.863 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.122 13:37:09 -- nvmf/common.sh@717 -- # local ip 00:26:52.122 13:37:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.122 13:37:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.122 13:37:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.122 13:37:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:52.122 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.122 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.122 nvme0n1 00:26:52.122 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.122 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.122 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.122 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.122 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.122 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.122 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.122 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.122 13:37:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:52.122 13:37:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.122 13:37:09 -- host/auth.sh@44 -- # digest=sha512 00:26:52.122 13:37:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.122 13:37:09 -- host/auth.sh@44 -- # keyid=3 00:26:52.122 13:37:09 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:52.122 13:37:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.122 13:37:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:52.122 13:37:09 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:52.122 13:37:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:26:52.122 13:37:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.122 13:37:09 -- host/auth.sh@68 -- # digest=sha512 00:26:52.122 13:37:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:52.122 13:37:09 
-- host/auth.sh@68 -- # keyid=3 00:26:52.122 13:37:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.122 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.122 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.122 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.122 13:37:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.122 13:37:09 -- nvmf/common.sh@717 -- # local ip 00:26:52.122 13:37:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.122 13:37:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.122 13:37:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.122 13:37:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.122 13:37:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.122 13:37:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:52.122 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.122 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 nvme0n1 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.380 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.380 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.380 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.380 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.380 13:37:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:52.380 13:37:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.380 13:37:09 -- host/auth.sh@44 -- # digest=sha512 00:26:52.380 13:37:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.380 13:37:09 -- host/auth.sh@44 -- # keyid=4 00:26:52.380 13:37:09 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:52.380 13:37:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.380 13:37:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:26:52.380 13:37:09 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:52.380 13:37:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:26:52.380 13:37:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.380 13:37:09 -- host/auth.sh@68 -- # digest=sha512 00:26:52.380 13:37:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:26:52.380 13:37:09 -- host/auth.sh@68 -- # keyid=4 00:26:52.380 13:37:09 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.380 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.380 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.380 13:37:09 -- nvmf/common.sh@717 -- # local ip 00:26:52.380 13:37:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.380 13:37:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.380 13:37:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.380 13:37:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.380 13:37:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.380 13:37:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.380 13:37:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.380 13:37:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.380 13:37:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.380 13:37:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.380 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.380 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 nvme0n1 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.380 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.380 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.380 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.380 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.380 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.638 13:37:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.638 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.638 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:09 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.638 13:37:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.638 13:37:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:52.638 13:37:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.638 13:37:09 -- host/auth.sh@44 -- # digest=sha512 00:26:52.638 13:37:09 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.638 13:37:09 -- host/auth.sh@44 -- # keyid=0 00:26:52.638 13:37:09 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:52.638 13:37:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.638 13:37:09 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:52.638 13:37:09 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:52.638 13:37:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:26:52.638 13:37:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.638 13:37:09 -- host/auth.sh@68 -- # digest=sha512 00:26:52.638 13:37:09 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:52.638 13:37:09 -- host/auth.sh@68 -- # keyid=0 00:26:52.638 13:37:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
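All of the near-identical blocks in this log are produced by three nested loops visible in the trace (host/auth.sh@107-110): digest, then DH group, then key id, with the sha512/ffdhe3072 combinations starting here. A rough sketch of that driver loop follows; the array contents are assumptions inferred from the combinations that appear in this output, and key0..key4 are hypothetical placeholders for the DHHC-1 secrets shown above.

```bash
# Hedged reconstruction of the driver loop behind the repeated rounds in this log.
# ASSUMPTION: array contents and the key0..key4 placeholders; the nesting mirrors
# the for-loops traced at host/auth.sh@107-110.
digests=(sha256 sha384 sha512)                               # sha384 and sha512 appear in this section
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # all five show up in the trace
keys=("$key0" "$key1" "$key2" "$key3" "$key4")               # DHHC-1 secrets for key ids 0-4

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # stage the key on the target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the host
        done
    done
done
```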
00:26:52.638 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.638 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.638 13:37:09 -- nvmf/common.sh@717 -- # local ip 00:26:52.638 13:37:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.638 13:37:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.638 13:37:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.638 13:37:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.638 13:37:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.638 13:37:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.638 13:37:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.638 13:37:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.638 13:37:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.638 13:37:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:52.638 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.638 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 nvme0n1 00:26:52.638 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.638 13:37:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.638 13:37:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.638 13:37:09 -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 13:37:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.638 13:37:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.638 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.638 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.638 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.638 13:37:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.638 13:37:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:52.638 13:37:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.638 13:37:10 -- host/auth.sh@44 -- # digest=sha512 00:26:52.638 13:37:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.638 13:37:10 -- host/auth.sh@44 -- # keyid=1 00:26:52.638 13:37:10 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:52.638 13:37:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.638 13:37:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:52.639 13:37:10 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:52.639 13:37:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:26:52.639 13:37:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.639 13:37:10 -- host/auth.sh@68 -- # digest=sha512 00:26:52.639 13:37:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:52.639 13:37:10 -- host/auth.sh@68 -- # keyid=1 00:26:52.639 13:37:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.639 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.639 13:37:10 -- 
common/autotest_common.sh@10 -- # set +x 00:26:52.639 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.639 13:37:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:52.639 13:37:10 -- nvmf/common.sh@717 -- # local ip 00:26:52.639 13:37:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.639 13:37:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.639 13:37:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.639 13:37:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.639 13:37:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.639 13:37:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.639 13:37:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.639 13:37:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.639 13:37:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.639 13:37:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:52.639 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.639 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.896 nvme0n1 00:26:52.896 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.896 13:37:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.896 13:37:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:52.896 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.896 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.896 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.896 13:37:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.896 13:37:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.896 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.896 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.896 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.896 13:37:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:52.896 13:37:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:52.896 13:37:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:52.896 13:37:10 -- host/auth.sh@44 -- # digest=sha512 00:26:52.896 13:37:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:52.896 13:37:10 -- host/auth.sh@44 -- # keyid=2 00:26:52.896 13:37:10 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:52.896 13:37:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:52.896 13:37:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:52.896 13:37:10 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:52.896 13:37:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:26:52.896 13:37:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:52.896 13:37:10 -- host/auth.sh@68 -- # digest=sha512 00:26:52.896 13:37:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:52.896 13:37:10 -- host/auth.sh@68 -- # keyid=2 00:26:52.896 13:37:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:52.896 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.896 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:52.896 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.896 13:37:10 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:26:52.896 13:37:10 -- nvmf/common.sh@717 -- # local ip 00:26:52.896 13:37:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:52.896 13:37:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:52.896 13:37:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.896 13:37:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.896 13:37:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:52.896 13:37:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.896 13:37:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:52.896 13:37:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:52.896 13:37:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:52.896 13:37:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:52.896 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.896 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.154 nvme0n1 00:26:53.154 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.154 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.154 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.154 13:37:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.154 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.154 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.154 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.154 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.154 13:37:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:53.154 13:37:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.154 13:37:10 -- host/auth.sh@44 -- # digest=sha512 00:26:53.154 13:37:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.154 13:37:10 -- host/auth.sh@44 -- # keyid=3 00:26:53.154 13:37:10 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:53.154 13:37:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.154 13:37:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:53.154 13:37:10 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:53.154 13:37:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:26:53.154 13:37:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.154 13:37:10 -- host/auth.sh@68 -- # digest=sha512 00:26:53.154 13:37:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:53.154 13:37:10 -- host/auth.sh@68 -- # keyid=3 00:26:53.154 13:37:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.154 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.154 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.154 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.154 13:37:10 -- nvmf/common.sh@717 -- # local ip 00:26:53.154 13:37:10 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:26:53.154 13:37:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.154 13:37:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.154 13:37:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.154 13:37:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.154 13:37:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.154 13:37:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.154 13:37:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.154 13:37:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.154 13:37:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:53.154 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.154 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.154 nvme0n1 00:26:53.154 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.154 13:37:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.154 13:37:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.154 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.154 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.411 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.411 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.411 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.411 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.411 13:37:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:53.411 13:37:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.411 13:37:10 -- host/auth.sh@44 -- # digest=sha512 00:26:53.411 13:37:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.411 13:37:10 -- host/auth.sh@44 -- # keyid=4 00:26:53.411 13:37:10 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:53.411 13:37:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.411 13:37:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:26:53.411 13:37:10 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:53.411 13:37:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:26:53.411 13:37:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.411 13:37:10 -- host/auth.sh@68 -- # digest=sha512 00:26:53.411 13:37:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:26:53.411 13:37:10 -- host/auth.sh@68 -- # keyid=4 00:26:53.411 13:37:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.411 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.411 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.411 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.411 13:37:10 -- nvmf/common.sh@717 -- # local ip 00:26:53.411 13:37:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.411 13:37:10 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:26:53.411 13:37:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.411 13:37:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.411 13:37:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.411 13:37:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.411 13:37:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.411 13:37:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.411 13:37:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.411 13:37:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.411 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.411 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.411 nvme0n1 00:26:53.411 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.411 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.411 13:37:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.411 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.411 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.411 13:37:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.411 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.411 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.668 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.668 13:37:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.668 13:37:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.668 13:37:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:53.668 13:37:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.668 13:37:10 -- host/auth.sh@44 -- # digest=sha512 00:26:53.668 13:37:10 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.668 13:37:10 -- host/auth.sh@44 -- # keyid=0 00:26:53.668 13:37:10 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:53.668 13:37:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.668 13:37:10 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:53.668 13:37:10 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:53.668 13:37:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:26:53.668 13:37:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.668 13:37:10 -- host/auth.sh@68 -- # digest=sha512 00:26:53.668 13:37:10 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:53.668 13:37:10 -- host/auth.sh@68 -- # keyid=0 00:26:53.668 13:37:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:53.668 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.668 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.668 13:37:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.668 13:37:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.668 13:37:10 -- nvmf/common.sh@717 -- # local ip 00:26:53.668 13:37:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.668 13:37:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.668 13:37:10 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.668 13:37:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.668 13:37:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.668 13:37:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.668 13:37:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.668 13:37:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.668 13:37:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.668 13:37:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:53.668 13:37:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.668 13:37:10 -- common/autotest_common.sh@10 -- # set +x 00:26:53.668 nvme0n1 00:26:53.668 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.668 13:37:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.668 13:37:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.668 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.668 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:53.668 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.925 13:37:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.925 13:37:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.925 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.925 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:53.925 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.925 13:37:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:53.925 13:37:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:53.925 13:37:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:53.925 13:37:11 -- host/auth.sh@44 -- # digest=sha512 00:26:53.925 13:37:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.925 13:37:11 -- host/auth.sh@44 -- # keyid=1 00:26:53.925 13:37:11 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:53.925 13:37:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:53.925 13:37:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:53.925 13:37:11 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:53.925 13:37:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:26:53.925 13:37:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:53.925 13:37:11 -- host/auth.sh@68 -- # digest=sha512 00:26:53.925 13:37:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:53.925 13:37:11 -- host/auth.sh@68 -- # keyid=1 00:26:53.925 13:37:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:53.925 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.925 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:53.925 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.925 13:37:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:53.925 13:37:11 -- nvmf/common.sh@717 -- # local ip 00:26:53.925 13:37:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:53.925 13:37:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:53.925 13:37:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.925 13:37:11 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.925 13:37:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:53.925 13:37:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.925 13:37:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:53.925 13:37:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:53.925 13:37:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:53.925 13:37:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:53.925 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.925 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:53.925 nvme0n1 00:26:53.925 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.925 13:37:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.925 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.925 13:37:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:53.925 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.183 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.183 13:37:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.183 13:37:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.183 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.183 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.183 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.183 13:37:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.183 13:37:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:54.183 13:37:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.183 13:37:11 -- host/auth.sh@44 -- # digest=sha512 00:26:54.183 13:37:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.183 13:37:11 -- host/auth.sh@44 -- # keyid=2 00:26:54.183 13:37:11 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:54.183 13:37:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.183 13:37:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.183 13:37:11 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:54.183 13:37:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:26:54.183 13:37:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.183 13:37:11 -- host/auth.sh@68 -- # digest=sha512 00:26:54.183 13:37:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.183 13:37:11 -- host/auth.sh@68 -- # keyid=2 00:26:54.183 13:37:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.183 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.183 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.183 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.183 13:37:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.183 13:37:11 -- nvmf/common.sh@717 -- # local ip 00:26:54.183 13:37:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.183 13:37:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.183 13:37:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.183 13:37:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.183 13:37:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.183 13:37:11 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:54.183 13:37:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.183 13:37:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.183 13:37:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.183 13:37:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:54.183 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.183 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 nvme0n1 00:26:54.441 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.441 13:37:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.441 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.441 13:37:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:54.441 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.441 13:37:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.441 13:37:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.441 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.441 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.441 13:37:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.441 13:37:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:54.441 13:37:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.441 13:37:11 -- host/auth.sh@44 -- # digest=sha512 00:26:54.441 13:37:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.441 13:37:11 -- host/auth.sh@44 -- # keyid=3 00:26:54.441 13:37:11 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:54.441 13:37:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.441 13:37:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.441 13:37:11 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:54.441 13:37:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:26:54.441 13:37:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.441 13:37:11 -- host/auth.sh@68 -- # digest=sha512 00:26:54.441 13:37:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.441 13:37:11 -- host/auth.sh@68 -- # keyid=3 00:26:54.441 13:37:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.441 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.441 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.441 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.441 13:37:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.441 13:37:11 -- nvmf/common.sh@717 -- # local ip 00:26:54.441 13:37:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.441 13:37:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.441 13:37:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.441 13:37:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.441 13:37:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.441 13:37:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.441 13:37:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.441 13:37:11 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.441 13:37:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.441 13:37:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:54.441 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.441 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.700 nvme0n1 00:26:54.700 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.700 13:37:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.700 13:37:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:54.700 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.700 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.700 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.700 13:37:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.700 13:37:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.700 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.700 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.700 13:37:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.700 13:37:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.700 13:37:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:54.700 13:37:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.700 13:37:11 -- host/auth.sh@44 -- # digest=sha512 00:26:54.700 13:37:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.700 13:37:11 -- host/auth.sh@44 -- # keyid=4 00:26:54.700 13:37:11 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:54.700 13:37:11 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.700 13:37:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:26:54.700 13:37:11 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:54.700 13:37:11 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:26:54.700 13:37:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.700 13:37:11 -- host/auth.sh@68 -- # digest=sha512 00:26:54.700 13:37:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:26:54.700 13:37:11 -- host/auth.sh@68 -- # keyid=4 00:26:54.700 13:37:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.700 13:37:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.700 13:37:11 -- common/autotest_common.sh@10 -- # set +x 00:26:54.700 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.700 13:37:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.700 13:37:12 -- nvmf/common.sh@717 -- # local ip 00:26:54.700 13:37:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.700 13:37:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.700 13:37:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.700 13:37:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.700 13:37:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.700 13:37:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.700 13:37:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.700 13:37:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.700 13:37:12 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.700 13:37:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.700 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.700 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:54.959 nvme0n1 00:26:54.959 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.959 13:37:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.959 13:37:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:54.959 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.959 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:54.959 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.959 13:37:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.959 13:37:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.959 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.959 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:54.959 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.959 13:37:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.959 13:37:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:54.959 13:37:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:54.959 13:37:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:54.959 13:37:12 -- host/auth.sh@44 -- # digest=sha512 00:26:54.959 13:37:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.959 13:37:12 -- host/auth.sh@44 -- # keyid=0 00:26:54.959 13:37:12 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:54.959 13:37:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:54.959 13:37:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:54.959 13:37:12 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:54.959 13:37:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:26:54.959 13:37:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:54.959 13:37:12 -- host/auth.sh@68 -- # digest=sha512 00:26:54.959 13:37:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:54.959 13:37:12 -- host/auth.sh@68 -- # keyid=0 00:26:54.959 13:37:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:54.959 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.959 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:54.959 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:54.959 13:37:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:54.959 13:37:12 -- nvmf/common.sh@717 -- # local ip 00:26:54.959 13:37:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:54.959 13:37:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:54.959 13:37:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.959 13:37:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.959 13:37:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:54.959 13:37:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.959 13:37:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:54.959 13:37:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:54.959 13:37:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:54.959 13:37:12 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:54.959 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:54.959 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:55.217 nvme0n1 00:26:55.217 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.217 13:37:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.217 13:37:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:55.217 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.217 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:55.217 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.475 13:37:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.475 13:37:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.475 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.475 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:55.475 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.475 13:37:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:55.475 13:37:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:55.475 13:37:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:55.475 13:37:12 -- host/auth.sh@44 -- # digest=sha512 00:26:55.475 13:37:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.475 13:37:12 -- host/auth.sh@44 -- # keyid=1 00:26:55.475 13:37:12 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:55.475 13:37:12 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:55.475 13:37:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:55.475 13:37:12 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:55.475 13:37:12 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:26:55.475 13:37:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:55.475 13:37:12 -- host/auth.sh@68 -- # digest=sha512 00:26:55.475 13:37:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:55.475 13:37:12 -- host/auth.sh@68 -- # keyid=1 00:26:55.476 13:37:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.476 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.476 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:55.476 13:37:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.476 13:37:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:55.476 13:37:12 -- nvmf/common.sh@717 -- # local ip 00:26:55.476 13:37:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:55.476 13:37:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:55.476 13:37:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.476 13:37:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.476 13:37:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:55.476 13:37:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.476 13:37:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:55.476 13:37:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:55.476 13:37:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:55.476 13:37:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:55.476 13:37:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.476 13:37:12 -- common/autotest_common.sh@10 -- # set +x 00:26:55.734 nvme0n1 00:26:55.734 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.734 13:37:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.734 13:37:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:55.734 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.734 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.734 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.734 13:37:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.734 13:37:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.734 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.734 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.734 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.734 13:37:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:55.734 13:37:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:55.734 13:37:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:55.734 13:37:13 -- host/auth.sh@44 -- # digest=sha512 00:26:55.734 13:37:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.734 13:37:13 -- host/auth.sh@44 -- # keyid=2 00:26:55.734 13:37:13 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:55.734 13:37:13 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:55.734 13:37:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:55.734 13:37:13 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:55.734 13:37:13 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:26:55.734 13:37:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:55.734 13:37:13 -- host/auth.sh@68 -- # digest=sha512 00:26:55.734 13:37:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:55.734 13:37:13 -- host/auth.sh@68 -- # keyid=2 00:26:55.734 13:37:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.734 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.734 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:55.734 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.734 13:37:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:55.734 13:37:13 -- nvmf/common.sh@717 -- # local ip 00:26:55.734 13:37:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:55.734 13:37:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:55.734 13:37:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.734 13:37:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.735 13:37:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:55.735 13:37:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.735 13:37:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:55.735 13:37:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:55.735 13:37:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:55.735 13:37:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:55.735 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.735 13:37:13 -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.304 nvme0n1 00:26:56.304 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.304 13:37:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.304 13:37:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.304 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.304 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.304 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.304 13:37:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.304 13:37:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.304 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.304 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.304 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.304 13:37:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.304 13:37:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:56.304 13:37:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.304 13:37:13 -- host/auth.sh@44 -- # digest=sha512 00:26:56.304 13:37:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.304 13:37:13 -- host/auth.sh@44 -- # keyid=3 00:26:56.304 13:37:13 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:56.304 13:37:13 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:56.304 13:37:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:56.304 13:37:13 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:56.304 13:37:13 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:26:56.304 13:37:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.304 13:37:13 -- host/auth.sh@68 -- # digest=sha512 00:26:56.304 13:37:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:56.304 13:37:13 -- host/auth.sh@68 -- # keyid=3 00:26:56.304 13:37:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.304 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.304 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.304 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.304 13:37:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.304 13:37:13 -- nvmf/common.sh@717 -- # local ip 00:26:56.304 13:37:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.304 13:37:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.304 13:37:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.304 13:37:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.304 13:37:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.304 13:37:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.304 13:37:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.304 13:37:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.304 13:37:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.304 13:37:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:56.304 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.304 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.567 nvme0n1 00:26:56.567 13:37:13 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:26:56.567 13:37:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.567 13:37:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:56.567 13:37:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.567 13:37:13 -- common/autotest_common.sh@10 -- # set +x 00:26:56.567 13:37:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.567 13:37:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.567 13:37:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.567 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.567 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:56.826 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.826 13:37:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:56.826 13:37:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:56.826 13:37:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:56.826 13:37:14 -- host/auth.sh@44 -- # digest=sha512 00:26:56.826 13:37:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.826 13:37:14 -- host/auth.sh@44 -- # keyid=4 00:26:56.826 13:37:14 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:56.826 13:37:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:56.826 13:37:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:26:56.826 13:37:14 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:56.826 13:37:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:26:56.826 13:37:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:56.826 13:37:14 -- host/auth.sh@68 -- # digest=sha512 00:26:56.826 13:37:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:26:56.826 13:37:14 -- host/auth.sh@68 -- # keyid=4 00:26:56.826 13:37:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.826 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.826 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:56.826 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:56.826 13:37:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:56.826 13:37:14 -- nvmf/common.sh@717 -- # local ip 00:26:56.826 13:37:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:56.826 13:37:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:56.826 13:37:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.826 13:37:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.826 13:37:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:56.826 13:37:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.826 13:37:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:56.826 13:37:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:56.826 13:37:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:56.826 13:37:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.826 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:56.826 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 nvme0n1 00:26:57.085 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.085 13:37:14 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:57.085 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.085 13:37:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.085 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.085 13:37:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.085 13:37:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.085 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.085 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.085 13:37:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.085 13:37:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.085 13:37:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:57.085 13:37:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.085 13:37:14 -- host/auth.sh@44 -- # digest=sha512 00:26:57.085 13:37:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.085 13:37:14 -- host/auth.sh@44 -- # keyid=0 00:26:57.085 13:37:14 -- host/auth.sh@45 -- # key=DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:57.085 13:37:14 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:57.085 13:37:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:57.085 13:37:14 -- host/auth.sh@49 -- # echo DHHC-1:00:M2E4NzA1Yjc4YTFlNzhlNWRjNTBkY2UxM2RkN2FlMjd0DIKq: 00:26:57.085 13:37:14 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:26:57.085 13:37:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.085 13:37:14 -- host/auth.sh@68 -- # digest=sha512 00:26:57.085 13:37:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:57.085 13:37:14 -- host/auth.sh@68 -- # keyid=0 00:26:57.085 13:37:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.085 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.085 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:57.085 13:37:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.085 13:37:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.085 13:37:14 -- nvmf/common.sh@717 -- # local ip 00:26:57.085 13:37:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.085 13:37:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.085 13:37:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.085 13:37:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.085 13:37:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.085 13:37:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.085 13:37:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.085 13:37:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.085 13:37:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.085 13:37:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:26:57.085 13:37:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.085 13:37:14 -- common/autotest_common.sh@10 -- # set +x 00:26:57.653 nvme0n1 00:26:57.653 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.653 13:37:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.653 13:37:15 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:26:57.653 13:37:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:57.653 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:57.653 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.911 13:37:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.911 13:37:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.911 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.911 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:57.911 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.911 13:37:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:57.911 13:37:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:57.911 13:37:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:57.911 13:37:15 -- host/auth.sh@44 -- # digest=sha512 00:26:57.911 13:37:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.911 13:37:15 -- host/auth.sh@44 -- # keyid=1 00:26:57.911 13:37:15 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:57.911 13:37:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:57.911 13:37:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:57.911 13:37:15 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:26:57.911 13:37:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:26:57.911 13:37:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:57.911 13:37:15 -- host/auth.sh@68 -- # digest=sha512 00:26:57.911 13:37:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:57.911 13:37:15 -- host/auth.sh@68 -- # keyid=1 00:26:57.911 13:37:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.911 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.911 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:57.911 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:57.911 13:37:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:57.911 13:37:15 -- nvmf/common.sh@717 -- # local ip 00:26:57.911 13:37:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:57.911 13:37:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:57.911 13:37:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.911 13:37:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.912 13:37:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:57.912 13:37:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.912 13:37:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:57.912 13:37:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:57.912 13:37:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:57.912 13:37:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:26:57.912 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:57.912 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:58.479 nvme0n1 00:26:58.479 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.479 13:37:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.479 13:37:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:58.479 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.479 13:37:15 -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.479 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.479 13:37:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.479 13:37:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.479 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.479 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:58.479 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.479 13:37:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:58.479 13:37:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:58.479 13:37:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:58.479 13:37:15 -- host/auth.sh@44 -- # digest=sha512 00:26:58.479 13:37:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.479 13:37:15 -- host/auth.sh@44 -- # keyid=2 00:26:58.479 13:37:15 -- host/auth.sh@45 -- # key=DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:58.479 13:37:15 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:58.479 13:37:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:58.479 13:37:15 -- host/auth.sh@49 -- # echo DHHC-1:01:M2Y5NzhlNTcwYWE3OWY5MWVjOTQ3MTU1ODM3MTQ5YTOD3iRB: 00:26:58.479 13:37:15 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:26:58.479 13:37:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:58.479 13:37:15 -- host/auth.sh@68 -- # digest=sha512 00:26:58.480 13:37:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:58.480 13:37:15 -- host/auth.sh@68 -- # keyid=2 00:26:58.480 13:37:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.480 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.480 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:58.480 13:37:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:58.480 13:37:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:58.480 13:37:15 -- nvmf/common.sh@717 -- # local ip 00:26:58.480 13:37:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:58.480 13:37:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:58.480 13:37:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.480 13:37:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.480 13:37:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:58.480 13:37:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.480 13:37:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:58.480 13:37:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:58.480 13:37:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:58.480 13:37:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:58.480 13:37:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:58.480 13:37:15 -- common/autotest_common.sh@10 -- # set +x 00:26:59.047 nvme0n1 00:26:59.047 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.047 13:37:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.048 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.048 13:37:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.048 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:26:59.048 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.048 13:37:16 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:59.048 13:37:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.048 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.048 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:26:59.306 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.306 13:37:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.306 13:37:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:59.306 13:37:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.306 13:37:16 -- host/auth.sh@44 -- # digest=sha512 00:26:59.306 13:37:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.306 13:37:16 -- host/auth.sh@44 -- # keyid=3 00:26:59.306 13:37:16 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:59.306 13:37:16 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:59.306 13:37:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:59.306 13:37:16 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzc0MmNkYmI0OTI5ODkzYTM4ZmUyM2FiYzk5MTkwZWFiY2QwODYyNjQ5ZGIxZjhhReIJww==: 00:26:59.306 13:37:16 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:26:59.306 13:37:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.306 13:37:16 -- host/auth.sh@68 -- # digest=sha512 00:26:59.306 13:37:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:59.306 13:37:16 -- host/auth.sh@68 -- # keyid=3 00:26:59.306 13:37:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.306 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.306 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:26:59.306 13:37:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.306 13:37:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.306 13:37:16 -- nvmf/common.sh@717 -- # local ip 00:26:59.306 13:37:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.306 13:37:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.306 13:37:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.306 13:37:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.306 13:37:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.306 13:37:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.306 13:37:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.306 13:37:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.306 13:37:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.306 13:37:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:26:59.306 13:37:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.306 13:37:16 -- common/autotest_common.sh@10 -- # set +x 00:26:59.873 nvme0n1 00:26:59.873 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.873 13:37:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.873 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.873 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:26:59.873 13:37:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:26:59.873 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.873 13:37:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.873 13:37:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.873 
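Every attach in this section is preceded by the same nvmf/common.sh@717-731 block, which resolves the address the host-side controller should dial; for tcp it selects NVMF_INITIATOR_IP and in this run that resolves to 10.0.0.1, while rdma would use NVMF_FIRST_TARGET_IP. A rough reconstruction follows; the candidate map and the final value are visible in the trace, but the transport variable name and the ${!ip} indirection are assumptions.

```bash
# Approximate reconstruction of get_main_ns_ip from the repeated
# nvmf/common.sh@717-731 trace lines above. TEST_TRANSPORT and the
# indirect expansion are assumptions; the candidate map and the
# resulting 10.0.0.1 are taken from the log.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)
	ip=${ip_candidates[$TEST_TRANSPORT]} # picks NVMF_INITIATOR_IP for tcp
	[[ -n ${!ip} ]] && echo "${!ip}"     # dereferences it; 10.0.0.1 in this run
}
```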
13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.873 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:26:59.873 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.873 13:37:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:26:59.873 13:37:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:59.873 13:37:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:26:59.873 13:37:17 -- host/auth.sh@44 -- # digest=sha512 00:26:59.873 13:37:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.873 13:37:17 -- host/auth.sh@44 -- # keyid=4 00:26:59.873 13:37:17 -- host/auth.sh@45 -- # key=DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:59.873 13:37:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:26:59.873 13:37:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:26:59.873 13:37:17 -- host/auth.sh@49 -- # echo DHHC-1:03:ZTY2NTc5ZTM0ZmI2NmZmOTZlOTdkZWJkZDAxODFlNzIxZGIxNGE3MzgwZmEwZDlkOWE4N2NmZWE2NzZiODRjN08MTkQ=: 00:26:59.873 13:37:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:26:59.873 13:37:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:26:59.873 13:37:17 -- host/auth.sh@68 -- # digest=sha512 00:26:59.873 13:37:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:26:59.873 13:37:17 -- host/auth.sh@68 -- # keyid=4 00:26:59.873 13:37:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.873 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.873 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:26:59.873 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:59.873 13:37:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:26:59.873 13:37:17 -- nvmf/common.sh@717 -- # local ip 00:26:59.873 13:37:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:26:59.873 13:37:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:26:59.873 13:37:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.873 13:37:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.873 13:37:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:26:59.873 13:37:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.873 13:37:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:26:59.873 13:37:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:26:59.873 13:37:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:26:59.873 13:37:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.873 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:59.873 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.439 nvme0n1 00:27:00.439 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.439 13:37:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.439 13:37:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:00.439 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.439 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.439 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.439 13:37:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.439 13:37:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.439 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.439 
13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.439 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.439 13:37:17 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:00.439 13:37:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:00.439 13:37:17 -- host/auth.sh@44 -- # digest=sha256 00:27:00.439 13:37:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.439 13:37:17 -- host/auth.sh@44 -- # keyid=1 00:27:00.439 13:37:17 -- host/auth.sh@45 -- # key=DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:27:00.439 13:37:17 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:00.439 13:37:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:00.439 13:37:17 -- host/auth.sh@49 -- # echo DHHC-1:00:ZThkMDU4NzE3OWRkYjEzMGQ3OGM2Mzc3ZjFhZjU5NzI0N2QzYjA3MzlkZWQyNGYz29PSWA==: 00:27:00.439 13:37:17 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:00.439 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.439 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.439 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.439 13:37:17 -- host/auth.sh@119 -- # get_main_ns_ip 00:27:00.439 13:37:17 -- nvmf/common.sh@717 -- # local ip 00:27:00.439 13:37:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:00.439 13:37:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:00.439 13:37:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.439 13:37:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.439 13:37:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:00.439 13:37:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.439 13:37:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:00.439 13:37:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:00.439 13:37:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:00.439 13:37:17 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.439 13:37:17 -- common/autotest_common.sh@638 -- # local es=0 00:27:00.439 13:37:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.439 13:37:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:00.439 13:37:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.439 13:37:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:00.439 13:37:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.439 13:37:17 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:00.439 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.439 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.700 2024/04/26 13:37:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:27:00.700 request: 00:27:00.700 { 00:27:00.700 "method": 
"bdev_nvme_attach_controller", 00:27:00.700 "params": { 00:27:00.700 "name": "nvme0", 00:27:00.700 "trtype": "tcp", 00:27:00.700 "traddr": "10.0.0.1", 00:27:00.700 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:00.700 "adrfam": "ipv4", 00:27:00.700 "trsvcid": "4420", 00:27:00.700 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:27:00.700 } 00:27:00.700 } 00:27:00.700 Got JSON-RPC error response 00:27:00.700 GoRPCClient: error on JSON-RPC call 00:27:00.700 13:37:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:00.700 13:37:17 -- common/autotest_common.sh@641 -- # es=1 00:27:00.700 13:37:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:00.700 13:37:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:00.700 13:37:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:00.700 13:37:17 -- host/auth.sh@121 -- # jq length 00:27:00.700 13:37:17 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.700 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.700 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.700 13:37:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.700 13:37:17 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:27:00.700 13:37:17 -- host/auth.sh@124 -- # get_main_ns_ip 00:27:00.700 13:37:17 -- nvmf/common.sh@717 -- # local ip 00:27:00.700 13:37:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:00.700 13:37:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:00.700 13:37:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.700 13:37:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.700 13:37:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:00.700 13:37:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.700 13:37:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:00.700 13:37:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:00.700 13:37:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:00.700 13:37:17 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.700 13:37:17 -- common/autotest_common.sh@638 -- # local es=0 00:27:00.700 13:37:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.700 13:37:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:00.700 13:37:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.700 13:37:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:00.700 13:37:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.700 13:37:17 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:00.700 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.700 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.700 2024/04/26 13:37:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:27:00.700 
request: 00:27:00.700 { 00:27:00.700 "method": "bdev_nvme_attach_controller", 00:27:00.700 "params": { 00:27:00.700 "name": "nvme0", 00:27:00.700 "trtype": "tcp", 00:27:00.700 "traddr": "10.0.0.1", 00:27:00.700 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:00.700 "adrfam": "ipv4", 00:27:00.700 "trsvcid": "4420", 00:27:00.700 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:00.700 "dhchap_key": "key2" 00:27:00.700 } 00:27:00.700 } 00:27:00.700 Got JSON-RPC error response 00:27:00.700 GoRPCClient: error on JSON-RPC call 00:27:00.700 13:37:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:00.700 13:37:17 -- common/autotest_common.sh@641 -- # es=1 00:27:00.700 13:37:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:00.700 13:37:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:00.700 13:37:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:00.700 13:37:17 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.700 13:37:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.700 13:37:17 -- host/auth.sh@127 -- # jq length 00:27:00.700 13:37:17 -- common/autotest_common.sh@10 -- # set +x 00:27:00.700 13:37:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.700 13:37:18 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:27:00.700 13:37:18 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:00.700 13:37:18 -- host/auth.sh@130 -- # cleanup 00:27:00.700 13:37:18 -- host/auth.sh@24 -- # nvmftestfini 00:27:00.700 13:37:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:00.700 13:37:18 -- nvmf/common.sh@117 -- # sync 00:27:00.700 13:37:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:00.700 13:37:18 -- nvmf/common.sh@120 -- # set +e 00:27:00.700 13:37:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:00.700 13:37:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:00.700 rmmod nvme_tcp 00:27:00.700 rmmod nvme_fabrics 00:27:00.700 13:37:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:00.700 13:37:18 -- nvmf/common.sh@124 -- # set -e 00:27:00.700 13:37:18 -- nvmf/common.sh@125 -- # return 0 00:27:00.700 13:37:18 -- nvmf/common.sh@478 -- # '[' -n 83911 ']' 00:27:00.700 13:37:18 -- nvmf/common.sh@479 -- # killprocess 83911 00:27:00.700 13:37:18 -- common/autotest_common.sh@936 -- # '[' -z 83911 ']' 00:27:00.700 13:37:18 -- common/autotest_common.sh@940 -- # kill -0 83911 00:27:00.700 13:37:18 -- common/autotest_common.sh@941 -- # uname 00:27:00.700 13:37:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:00.700 13:37:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83911 00:27:00.700 13:37:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:00.700 13:37:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:00.700 13:37:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83911' 00:27:00.700 killing process with pid 83911 00:27:00.700 13:37:18 -- common/autotest_common.sh@955 -- # kill 83911 00:27:00.700 13:37:18 -- common/autotest_common.sh@960 -- # wait 83911 00:27:00.959 13:37:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:00.959 13:37:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:00.959 13:37:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:00.959 13:37:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:00.959 13:37:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:00.959 13:37:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.959 
13:37:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:00.959 13:37:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:00.959 13:37:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:01.218 13:37:18 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:01.218 13:37:18 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:01.218 13:37:18 -- host/auth.sh@27 -- # clean_kernel_target 00:27:01.218 13:37:18 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:01.218 13:37:18 -- nvmf/common.sh@675 -- # echo 0 00:27:01.218 13:37:18 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.218 13:37:18 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:01.218 13:37:18 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.218 13:37:18 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.218 13:37:18 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.218 13:37:18 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:01.218 13:37:18 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:01.784 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:01.784 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:02.043 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:02.043 13:37:19 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2ae /tmp/spdk.key-null.lO9 /tmp/spdk.key-sha256.1Kg /tmp/spdk.key-sha384.kE7 /tmp/spdk.key-sha512.LRz /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:27:02.043 13:37:19 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:02.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:02.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:02.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:02.560 00:27:02.560 real 0m39.346s 00:27:02.560 user 0m35.377s 00:27:02.560 sys 0m3.935s 00:27:02.560 13:37:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:02.560 13:37:19 -- common/autotest_common.sh@10 -- # set +x 00:27:02.560 ************************************ 00:27:02.560 END TEST nvmf_auth 00:27:02.560 ************************************ 00:27:02.560 13:37:19 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:27:02.560 13:37:19 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:02.560 13:37:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:02.560 13:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:02.560 13:37:19 -- common/autotest_common.sh@10 -- # set +x 00:27:02.560 ************************************ 00:27:02.560 START TEST nvmf_digest 00:27:02.560 ************************************ 00:27:02.560 13:37:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:02.560 * Looking for test storage... 
00:27:02.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:02.560 13:37:19 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:02.560 13:37:19 -- nvmf/common.sh@7 -- # uname -s 00:27:02.560 13:37:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.560 13:37:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.560 13:37:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.560 13:37:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.560 13:37:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.560 13:37:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.560 13:37:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.560 13:37:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.560 13:37:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.560 13:37:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.560 13:37:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:27:02.560 13:37:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:27:02.560 13:37:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.560 13:37:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.560 13:37:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:02.560 13:37:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.560 13:37:20 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.560 13:37:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.560 13:37:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.560 13:37:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.560 13:37:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.560 13:37:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.560 13:37:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.560 13:37:20 -- paths/export.sh@5 -- # export PATH 00:27:02.560 13:37:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.560 13:37:20 -- nvmf/common.sh@47 -- # : 0 00:27:02.560 13:37:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.560 13:37:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.560 13:37:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.560 13:37:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.560 13:37:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.560 13:37:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.560 13:37:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.560 13:37:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.560 13:37:20 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:02.819 13:37:20 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:02.819 13:37:20 -- host/digest.sh@16 -- # runtime=2 00:27:02.819 13:37:20 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:02.819 13:37:20 -- host/digest.sh@138 -- # nvmftestinit 00:27:02.819 13:37:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:02.819 13:37:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.819 13:37:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:02.819 13:37:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:02.819 13:37:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:02.819 13:37:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.819 13:37:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:02.819 13:37:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.819 13:37:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:02.819 13:37:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:02.819 13:37:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:02.819 13:37:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:02.819 13:37:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:02.819 13:37:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:02.819 13:37:20 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.819 13:37:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.819 13:37:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:02.819 13:37:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:02.819 13:37:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:27:02.819 13:37:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:02.819 13:37:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:02.819 13:37:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.819 13:37:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:02.819 13:37:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:02.819 13:37:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:02.819 13:37:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:02.819 13:37:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:02.819 13:37:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:02.819 Cannot find device "nvmf_tgt_br" 00:27:02.819 13:37:20 -- nvmf/common.sh@155 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:02.819 Cannot find device "nvmf_tgt_br2" 00:27:02.819 13:37:20 -- nvmf/common.sh@156 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:02.819 13:37:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:02.819 Cannot find device "nvmf_tgt_br" 00:27:02.819 13:37:20 -- nvmf/common.sh@158 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:02.819 Cannot find device "nvmf_tgt_br2" 00:27:02.819 13:37:20 -- nvmf/common.sh@159 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:02.819 13:37:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:02.819 13:37:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:02.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.819 13:37:20 -- nvmf/common.sh@162 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:02.819 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.819 13:37:20 -- nvmf/common.sh@163 -- # true 00:27:02.819 13:37:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:02.819 13:37:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:02.819 13:37:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:02.819 13:37:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:02.819 13:37:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:02.819 13:37:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:02.819 13:37:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:02.819 13:37:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:02.819 13:37:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:02.819 13:37:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:02.819 13:37:20 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:02.819 13:37:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:02.819 13:37:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:02.819 13:37:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:02.819 13:37:20 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:03.078 13:37:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:03.078 13:37:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:03.078 13:37:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:03.078 13:37:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:03.078 13:37:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:03.078 13:37:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:03.078 13:37:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:03.078 13:37:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:03.078 13:37:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:03.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:27:03.078 00:27:03.078 --- 10.0.0.2 ping statistics --- 00:27:03.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.078 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:27:03.078 13:37:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:03.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:03.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:03.078 00:27:03.078 --- 10.0.0.3 ping statistics --- 00:27:03.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.078 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:03.078 13:37:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:03.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:27:03.078 00:27:03.078 --- 10.0.0.1 ping statistics --- 00:27:03.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.078 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:03.078 13:37:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.078 13:37:20 -- nvmf/common.sh@422 -- # return 0 00:27:03.078 13:37:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:03.078 13:37:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.078 13:37:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:03.078 13:37:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:03.078 13:37:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.078 13:37:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:03.078 13:37:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:03.078 13:37:20 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:03.078 13:37:20 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:03.078 13:37:20 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:03.078 13:37:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.078 13:37:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.078 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.078 ************************************ 00:27:03.078 START TEST nvmf_digest_clean 00:27:03.078 ************************************ 00:27:03.078 13:37:20 -- common/autotest_common.sh@1111 -- # run_digest 00:27:03.078 13:37:20 -- host/digest.sh@120 -- # local dsa_initiator 00:27:03.078 13:37:20 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:03.078 13:37:20 -- host/digest.sh@121 -- # dsa_initiator=false 
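The nvmf_veth_init trace above builds the bridged loopback topology that the rest of the digest suite runs over. Condensed into plain commands (namespace, interface and 10.0.0.x names exactly as printed in the trace; the second target interface nvmf_tgt_if2/10.0.0.3 is created the same way, and the stale-link cleanup is omitted), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> in-namespace target reachability check

The ping round trips above (to 10.0.0.2 and 10.0.0.3, plus the reverse ping of 10.0.0.1 from inside the namespace) confirm the bridge is passing traffic before any NVMe/TCP listener is created on 10.0.0.2:4420.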
00:27:03.078 13:37:20 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:03.078 13:37:20 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:03.078 13:37:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:03.078 13:37:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:03.078 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.078 13:37:20 -- nvmf/common.sh@470 -- # nvmfpid=85545 00:27:03.078 13:37:20 -- nvmf/common.sh@471 -- # waitforlisten 85545 00:27:03.078 13:37:20 -- common/autotest_common.sh@817 -- # '[' -z 85545 ']' 00:27:03.078 13:37:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:03.078 13:37:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.078 13:37:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:03.078 13:37:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.078 13:37:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:03.078 13:37:20 -- common/autotest_common.sh@10 -- # set +x 00:27:03.078 [2024-04-26 13:37:20.513897] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:03.078 [2024-04-26 13:37:20.514019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.337 [2024-04-26 13:37:20.655937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.337 [2024-04-26 13:37:20.779949] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.337 [2024-04-26 13:37:20.780030] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.337 [2024-04-26 13:37:20.780046] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.337 [2024-04-26 13:37:20.780057] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.337 [2024-04-26 13:37:20.780066] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
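The target for this pass is the nvmf_tgt instance launched just above inside nvmf_tgt_ns_spdk with --wait-for-rpc (pid 85545). The trace does not echo the individual RPCs issued by common_target_config, so the following is only a rough reconstruction using standard rpc.py calls, with illustrative sizes for the null bdev; it would account for the null0 bdev, TCP transport init and 10.0.0.2:4420 listener notices that appear just below:

  RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC framework_start_init                 # leave the --wait-for-rpc pre-init state
  $RPC nvmf_create_transport -t tcp
  $RPC bdev_null_create null0 100 4096      # size/block size are example values only
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The RPC socket is a Unix-domain socket, so it remains reachable from the default namespace even though the target's network stack lives in nvmf_tgt_ns_spdk.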
00:27:03.337 [2024-04-26 13:37:20.780105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.271 13:37:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:04.271 13:37:21 -- common/autotest_common.sh@850 -- # return 0 00:27:04.271 13:37:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:04.271 13:37:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:04.271 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.271 13:37:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.271 13:37:21 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:04.271 13:37:21 -- host/digest.sh@126 -- # common_target_config 00:27:04.271 13:37:21 -- host/digest.sh@43 -- # rpc_cmd 00:27:04.271 13:37:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:04.271 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.271 null0 00:27:04.271 [2024-04-26 13:37:21.598937] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.271 [2024-04-26 13:37:21.623077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.271 13:37:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:04.271 13:37:21 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:04.271 13:37:21 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:04.271 13:37:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:04.271 13:37:21 -- host/digest.sh@80 -- # rw=randread 00:27:04.271 13:37:21 -- host/digest.sh@80 -- # bs=4096 00:27:04.272 13:37:21 -- host/digest.sh@80 -- # qd=128 00:27:04.272 13:37:21 -- host/digest.sh@80 -- # scan_dsa=false 00:27:04.272 13:37:21 -- host/digest.sh@83 -- # bperfpid=85595 00:27:04.272 13:37:21 -- host/digest.sh@84 -- # waitforlisten 85595 /var/tmp/bperf.sock 00:27:04.272 13:37:21 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:04.272 13:37:21 -- common/autotest_common.sh@817 -- # '[' -z 85595 ']' 00:27:04.272 13:37:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.272 13:37:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:04.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:04.272 13:37:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.272 13:37:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:04.272 13:37:21 -- common/autotest_common.sh@10 -- # set +x 00:27:04.272 [2024-04-26 13:37:21.688980] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:27:04.272 [2024-04-26 13:37:21.689084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85595 ] 00:27:04.530 [2024-04-26 13:37:21.829508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.530 [2024-04-26 13:37:21.947282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.516 13:37:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:05.516 13:37:22 -- common/autotest_common.sh@850 -- # return 0 00:27:05.516 13:37:22 -- host/digest.sh@86 -- # false 00:27:05.516 13:37:22 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:05.516 13:37:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:05.774 13:37:23 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.774 13:37:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.339 nvme0n1 00:27:06.339 13:37:23 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:06.339 13:37:23 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:06.339 Running I/O for 2 seconds... 00:27:08.871 00:27:08.871 Latency(us) 00:27:08.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.871 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:08.871 nvme0n1 : 2.00 18503.27 72.28 0.00 0.00 6910.75 3768.32 11558.17 00:27:08.871 =================================================================================================================== 00:27:08.871 Total : 18503.27 72.28 0.00 0.00 6910.75 3768.32 11558.17 00:27:08.871 0 00:27:08.871 13:37:25 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:08.871 13:37:25 -- host/digest.sh@93 -- # get_accel_stats 00:27:08.871 13:37:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:08.871 13:37:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:08.871 13:37:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:08.871 | select(.opcode=="crc32c") 00:27:08.871 | "\(.module_name) \(.executed)"' 00:27:08.871 13:37:26 -- host/digest.sh@94 -- # false 00:27:08.871 13:37:26 -- host/digest.sh@94 -- # exp_module=software 00:27:08.871 13:37:26 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:08.871 13:37:26 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:08.871 13:37:26 -- host/digest.sh@98 -- # killprocess 85595 00:27:08.871 13:37:26 -- common/autotest_common.sh@936 -- # '[' -z 85595 ']' 00:27:08.871 13:37:26 -- common/autotest_common.sh@940 -- # kill -0 85595 00:27:08.871 13:37:26 -- common/autotest_common.sh@941 -- # uname 00:27:08.871 13:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:08.871 13:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85595 00:27:08.871 killing process with pid 85595 00:27:08.871 Received shutdown signal, test time was about 2.000000 seconds 00:27:08.871 00:27:08.871 Latency(us) 00:27:08.871 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:27:08.871 =================================================================================================================== 00:27:08.871 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:08.871 13:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:08.871 13:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:08.871 13:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85595' 00:27:08.871 13:37:26 -- common/autotest_common.sh@955 -- # kill 85595 00:27:08.871 13:37:26 -- common/autotest_common.sh@960 -- # wait 85595 00:27:09.130 13:37:26 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:09.130 13:37:26 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:09.130 13:37:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:09.130 13:37:26 -- host/digest.sh@80 -- # rw=randread 00:27:09.130 13:37:26 -- host/digest.sh@80 -- # bs=131072 00:27:09.130 13:37:26 -- host/digest.sh@80 -- # qd=16 00:27:09.130 13:37:26 -- host/digest.sh@80 -- # scan_dsa=false 00:27:09.130 13:37:26 -- host/digest.sh@83 -- # bperfpid=85691 00:27:09.130 13:37:26 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:09.130 13:37:26 -- host/digest.sh@84 -- # waitforlisten 85691 /var/tmp/bperf.sock 00:27:09.130 13:37:26 -- common/autotest_common.sh@817 -- # '[' -z 85691 ']' 00:27:09.130 13:37:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:09.130 13:37:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:09.130 13:37:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:09.130 13:37:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:09.130 13:37:26 -- common/autotest_common.sh@10 -- # set +x 00:27:09.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:09.130 Zero copy mechanism will not be used. 00:27:09.130 [2024-04-26 13:37:26.564703] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:27:09.130 [2024-04-26 13:37:26.564855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85691 ] 00:27:09.389 [2024-04-26 13:37:26.704029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.648 [2024-04-26 13:37:26.867803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.583 13:37:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.583 13:37:27 -- common/autotest_common.sh@850 -- # return 0 00:27:10.583 13:37:27 -- host/digest.sh@86 -- # false 00:27:10.583 13:37:27 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:10.583 13:37:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:10.840 13:37:28 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.841 13:37:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.407 nvme0n1 00:27:11.407 13:37:28 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:11.407 13:37:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:11.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.407 Zero copy mechanism will not be used. 00:27:11.407 Running I/O for 2 seconds... 00:27:13.357 00:27:13.357 Latency(us) 00:27:13.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.357 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:13.357 nvme0n1 : 2.00 5981.74 747.72 0.00 0.00 2671.01 722.39 10009.13 00:27:13.357 =================================================================================================================== 00:27:13.357 Total : 5981.74 747.72 0.00 0.00 2671.01 722.39 10009.13 00:27:13.357 0 00:27:13.357 13:37:30 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:13.357 13:37:30 -- host/digest.sh@93 -- # get_accel_stats 00:27:13.357 13:37:30 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:13.357 13:37:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:13.357 13:37:30 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:13.357 | select(.opcode=="crc32c") 00:27:13.357 | "\(.module_name) \(.executed)"' 00:27:13.615 13:37:31 -- host/digest.sh@94 -- # false 00:27:13.615 13:37:31 -- host/digest.sh@94 -- # exp_module=software 00:27:13.615 13:37:31 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:13.615 13:37:31 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:13.615 13:37:31 -- host/digest.sh@98 -- # killprocess 85691 00:27:13.615 13:37:31 -- common/autotest_common.sh@936 -- # '[' -z 85691 ']' 00:27:13.615 13:37:31 -- common/autotest_common.sh@940 -- # kill -0 85691 00:27:13.615 13:37:31 -- common/autotest_common.sh@941 -- # uname 00:27:13.615 13:37:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:13.615 13:37:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85691 00:27:13.615 13:37:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:13.615 
13:37:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:13.615 13:37:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85691' 00:27:13.873 killing process with pid 85691 00:27:13.873 13:37:31 -- common/autotest_common.sh@955 -- # kill 85691 00:27:13.873 Received shutdown signal, test time was about 2.000000 seconds 00:27:13.873 00:27:13.873 Latency(us) 00:27:13.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.873 =================================================================================================================== 00:27:13.873 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:13.873 13:37:31 -- common/autotest_common.sh@960 -- # wait 85691 00:27:14.132 13:37:31 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:14.132 13:37:31 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:14.132 13:37:31 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:14.132 13:37:31 -- host/digest.sh@80 -- # rw=randwrite 00:27:14.132 13:37:31 -- host/digest.sh@80 -- # bs=4096 00:27:14.132 13:37:31 -- host/digest.sh@80 -- # qd=128 00:27:14.132 13:37:31 -- host/digest.sh@80 -- # scan_dsa=false 00:27:14.132 13:37:31 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:14.132 13:37:31 -- host/digest.sh@83 -- # bperfpid=85782 00:27:14.132 13:37:31 -- host/digest.sh@84 -- # waitforlisten 85782 /var/tmp/bperf.sock 00:27:14.132 13:37:31 -- common/autotest_common.sh@817 -- # '[' -z 85782 ']' 00:27:14.132 13:37:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:14.132 13:37:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:14.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:14.132 13:37:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:14.132 13:37:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:14.132 13:37:31 -- common/autotest_common.sh@10 -- # set +x 00:27:14.132 [2024-04-26 13:37:31.517686] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:27:14.132 [2024-04-26 13:37:31.517839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85782 ] 00:27:14.389 [2024-04-26 13:37:31.669642] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.389 [2024-04-26 13:37:31.819913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.324 13:37:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:15.324 13:37:32 -- common/autotest_common.sh@850 -- # return 0 00:27:15.324 13:37:32 -- host/digest.sh@86 -- # false 00:27:15.324 13:37:32 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:15.324 13:37:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:15.582 13:37:32 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:15.582 13:37:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.150 nvme0n1 00:27:16.150 13:37:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:16.150 13:37:33 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.150 Running I/O for 2 seconds... 00:27:18.053 00:27:18.054 Latency(us) 00:27:18.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.054 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.054 nvme0n1 : 2.01 20487.36 80.03 0.00 0.00 6240.21 2561.86 21328.99 00:27:18.054 =================================================================================================================== 00:27:18.054 Total : 20487.36 80.03 0.00 0.00 6240.21 2561.86 21328.99 00:27:18.054 0 00:27:18.054 13:37:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:18.054 13:37:35 -- host/digest.sh@93 -- # get_accel_stats 00:27:18.054 13:37:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:18.054 13:37:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:18.054 13:37:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:18.054 | select(.opcode=="crc32c") 00:27:18.054 | "\(.module_name) \(.executed)"' 00:27:18.312 13:37:35 -- host/digest.sh@94 -- # false 00:27:18.312 13:37:35 -- host/digest.sh@94 -- # exp_module=software 00:27:18.312 13:37:35 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:18.312 13:37:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:18.312 13:37:35 -- host/digest.sh@98 -- # killprocess 85782 00:27:18.312 13:37:35 -- common/autotest_common.sh@936 -- # '[' -z 85782 ']' 00:27:18.312 13:37:35 -- common/autotest_common.sh@940 -- # kill -0 85782 00:27:18.312 13:37:35 -- common/autotest_common.sh@941 -- # uname 00:27:18.312 13:37:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:18.312 13:37:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85782 00:27:18.571 13:37:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:18.571 13:37:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:18.571 killing process with pid 85782 00:27:18.571 
13:37:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85782' 00:27:18.571 Received shutdown signal, test time was about 2.000000 seconds 00:27:18.571 00:27:18.571 Latency(us) 00:27:18.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.571 =================================================================================================================== 00:27:18.571 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.571 13:37:35 -- common/autotest_common.sh@955 -- # kill 85782 00:27:18.571 13:37:35 -- common/autotest_common.sh@960 -- # wait 85782 00:27:18.862 13:37:36 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:18.862 13:37:36 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:18.862 13:37:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:18.862 13:37:36 -- host/digest.sh@80 -- # rw=randwrite 00:27:18.862 13:37:36 -- host/digest.sh@80 -- # bs=131072 00:27:18.862 13:37:36 -- host/digest.sh@80 -- # qd=16 00:27:18.862 13:37:36 -- host/digest.sh@80 -- # scan_dsa=false 00:27:18.862 13:37:36 -- host/digest.sh@83 -- # bperfpid=85878 00:27:18.862 13:37:36 -- host/digest.sh@84 -- # waitforlisten 85878 /var/tmp/bperf.sock 00:27:18.862 13:37:36 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:18.862 13:37:36 -- common/autotest_common.sh@817 -- # '[' -z 85878 ']' 00:27:18.862 13:37:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.862 13:37:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:18.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.862 13:37:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.862 13:37:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:18.862 13:37:36 -- common/autotest_common.sh@10 -- # set +x 00:27:18.862 [2024-04-26 13:37:36.233834] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:18.862 [2024-04-26 13:37:36.233940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85878 ] 00:27:18.862 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:18.862 Zero copy mechanism will not be used. 
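The bdevperf instance starting here (pid 85878, randwrite with 128 KiB I/O at queue depth 16) is the last of the four nvmf_digest_clean passes, and every pass drives the same sequence; stripped of the xtrace prefixes it is essentially the following, with only the -w/-o/-q values changing per pass:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst enables the NVMe/TCP data digest
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats

The accel_get_stats output is then filtered with jq for the crc32c opcode; with scan_dsa=false the test expects the executing module to be "software", which is the exp_module=software check seen after each pass.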
00:27:19.119 [2024-04-26 13:37:36.373722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.119 [2024-04-26 13:37:36.551261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.051 13:37:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:20.051 13:37:37 -- common/autotest_common.sh@850 -- # return 0 00:27:20.052 13:37:37 -- host/digest.sh@86 -- # false 00:27:20.052 13:37:37 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:20.052 13:37:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:20.311 13:37:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:20.311 13:37:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:20.569 nvme0n1 00:27:20.569 13:37:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:20.569 13:37:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:20.828 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:20.828 Zero copy mechanism will not be used. 00:27:20.828 Running I/O for 2 seconds... 00:27:22.748 00:27:22.748 Latency(us) 00:27:22.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.748 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:22.748 nvme0n1 : 2.00 6677.82 834.73 0.00 0.00 2390.28 1936.29 9949.56 00:27:22.748 =================================================================================================================== 00:27:22.748 Total : 6677.82 834.73 0.00 0.00 2390.28 1936.29 9949.56 00:27:22.748 0 00:27:22.748 13:37:40 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:22.748 13:37:40 -- host/digest.sh@93 -- # get_accel_stats 00:27:22.748 13:37:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:22.748 13:37:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:22.748 13:37:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:22.748 | select(.opcode=="crc32c") 00:27:22.748 | "\(.module_name) \(.executed)"' 00:27:23.008 13:37:40 -- host/digest.sh@94 -- # false 00:27:23.008 13:37:40 -- host/digest.sh@94 -- # exp_module=software 00:27:23.008 13:37:40 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:23.008 13:37:40 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:23.008 13:37:40 -- host/digest.sh@98 -- # killprocess 85878 00:27:23.008 13:37:40 -- common/autotest_common.sh@936 -- # '[' -z 85878 ']' 00:27:23.008 13:37:40 -- common/autotest_common.sh@940 -- # kill -0 85878 00:27:23.008 13:37:40 -- common/autotest_common.sh@941 -- # uname 00:27:23.008 13:37:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.008 13:37:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85878 00:27:23.008 13:37:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:23.008 13:37:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:23.008 killing process with pid 85878 00:27:23.008 13:37:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85878' 00:27:23.008 Received shutdown signal, test time was about 2.000000 seconds 
00:27:23.008 00:27:23.008 Latency(us) 00:27:23.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.008 =================================================================================================================== 00:27:23.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:23.008 13:37:40 -- common/autotest_common.sh@955 -- # kill 85878 00:27:23.008 13:37:40 -- common/autotest_common.sh@960 -- # wait 85878 00:27:23.575 13:37:40 -- host/digest.sh@132 -- # killprocess 85545 00:27:23.575 13:37:40 -- common/autotest_common.sh@936 -- # '[' -z 85545 ']' 00:27:23.575 13:37:40 -- common/autotest_common.sh@940 -- # kill -0 85545 00:27:23.575 13:37:40 -- common/autotest_common.sh@941 -- # uname 00:27:23.575 13:37:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:23.575 13:37:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85545 00:27:23.575 13:37:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:23.575 13:37:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:23.575 killing process with pid 85545 00:27:23.575 13:37:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85545' 00:27:23.575 13:37:40 -- common/autotest_common.sh@955 -- # kill 85545 00:27:23.575 13:37:40 -- common/autotest_common.sh@960 -- # wait 85545 00:27:23.833 00:27:23.833 real 0m20.754s 00:27:23.833 user 0m40.008s 00:27:23.833 sys 0m5.217s 00:27:23.833 13:37:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:23.833 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:23.833 ************************************ 00:27:23.833 END TEST nvmf_digest_clean 00:27:23.833 ************************************ 00:27:23.833 13:37:41 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:23.833 13:37:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:23.833 13:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:23.833 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:24.092 ************************************ 00:27:24.092 START TEST nvmf_digest_error 00:27:24.092 ************************************ 00:27:24.092 13:37:41 -- common/autotest_common.sh@1111 -- # run_digest_error 00:27:24.092 13:37:41 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:24.092 13:37:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:24.092 13:37:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:24.092 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:24.092 13:37:41 -- nvmf/common.sh@470 -- # nvmfpid=86001 00:27:24.092 13:37:41 -- nvmf/common.sh@471 -- # waitforlisten 86001 00:27:24.092 13:37:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:24.092 13:37:41 -- common/autotest_common.sh@817 -- # '[' -z 86001 ']' 00:27:24.092 13:37:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.092 13:37:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:24.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.092 13:37:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
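A condensed sketch of the RPC sequence the nvmf_digest_clean pass above drives against the bdevperf instance (socket path, target address and arguments as recorded in this run; the bperf() wrapper is only shorthand for this sketch, not a helper from the test itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf() { $rpc -s /var/tmp/bperf.sock "$@"; }   # shorthand wrapper for this sketch

  bperf framework_start_init
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
  # the pass then checks that the crc32c accel opcode was actually exercised
  read -r acc_module acc_executed < <(bperf accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 )) && [[ $acc_module == software ]]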
00:27:24.092 13:37:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:24.092 13:37:41 -- common/autotest_common.sh@10 -- # set +x 00:27:24.092 [2024-04-26 13:37:41.400567] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:24.092 [2024-04-26 13:37:41.400675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.350 [2024-04-26 13:37:41.540925] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.350 [2024-04-26 13:37:41.706325] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.350 [2024-04-26 13:37:41.706409] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:24.350 [2024-04-26 13:37:41.706421] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.350 [2024-04-26 13:37:41.706430] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.350 [2024-04-26 13:37:41.706437] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.350 [2024-04-26 13:37:41.706477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.285 13:37:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:25.285 13:37:42 -- common/autotest_common.sh@850 -- # return 0 00:27:25.285 13:37:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:25.285 13:37:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:25.285 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:27:25.285 13:37:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.285 13:37:42 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:25.285 13:37:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.285 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:27:25.285 [2024-04-26 13:37:42.495110] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:25.285 13:37:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.285 13:37:42 -- host/digest.sh@105 -- # common_target_config 00:27:25.285 13:37:42 -- host/digest.sh@43 -- # rpc_cmd 00:27:25.285 13:37:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.285 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:27:25.285 null0 00:27:25.285 [2024-04-26 13:37:42.650816] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.285 [2024-04-26 13:37:42.674973] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.285 13:37:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.285 13:37:42 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:25.285 13:37:42 -- host/digest.sh@54 -- # local rw bs qd 00:27:25.285 13:37:42 -- host/digest.sh@56 -- # rw=randread 00:27:25.285 13:37:42 -- host/digest.sh@56 -- # bs=4096 00:27:25.285 13:37:42 -- host/digest.sh@56 -- # qd=128 00:27:25.285 13:37:42 -- host/digest.sh@58 -- # bperfpid=86049 00:27:25.285 13:37:42 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:25.285 13:37:42 -- host/digest.sh@60 -- # waitforlisten 
86049 /var/tmp/bperf.sock 00:27:25.285 13:37:42 -- common/autotest_common.sh@817 -- # '[' -z 86049 ']' 00:27:25.285 13:37:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.285 13:37:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:25.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:25.285 13:37:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.285 13:37:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:25.285 13:37:42 -- common/autotest_common.sh@10 -- # set +x 00:27:25.543 [2024-04-26 13:37:42.760557] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:25.543 [2024-04-26 13:37:42.760707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86049 ] 00:27:25.543 [2024-04-26 13:37:42.911278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.802 [2024-04-26 13:37:43.042888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.366 13:37:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:26.366 13:37:43 -- common/autotest_common.sh@850 -- # return 0 00:27:26.366 13:37:43 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:26.366 13:37:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:26.624 13:37:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:26.624 13:37:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.624 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:27:26.624 13:37:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.624 13:37:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.624 13:37:44 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.190 nvme0n1 00:27:27.190 13:37:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:27.190 13:37:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:27.190 13:37:44 -- common/autotest_common.sh@10 -- # set +x 00:27:27.190 13:37:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:27.191 13:37:44 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:27.191 13:37:44 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.191 Running I/O for 2 seconds... 
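The digest-error setup above splits across two RPC sockets: the nvmf target (which had crc32c assigned to the error accel module, and which announced /var/tmp/spdk.sock earlier in this run) and the bdevperf instance on /var/tmp/bperf.sock. A condensed sketch of that sequence, with the rpc_cmd/bperf_rpc helpers written out as direct rpc.py invocations; tgt() and bperf() are shorthand for this sketch only:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tgt()   { $rpc -s /var/tmp/spdk.sock  "$@"; }   # nvmf target (rpc_cmd)
  bperf() { $rpc -s /var/tmp/bperf.sock "$@"; }   # bdevperf   (bperf_rpc)

  tgt accel_assign_opc -o crc32c -m error              # crc32c served by the error module
  bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  tgt accel_error_inject_error -o crc32c -t disable    # keep injection quiet while connecting
  bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  tgt accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Each corrupted digest surfaces on the initiator as one of the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions that fill the remainder of this pass below.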
00:27:27.191 [2024-04-26 13:37:44.529881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.529942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.529958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.543417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.543458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.543472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.558154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.558198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.558213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.569889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.569928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.569944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.584334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.584374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.584388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.600159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.600197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.600212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.615652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.615690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.615705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.191 [2024-04-26 13:37:44.627044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.191 [2024-04-26 13:37:44.627079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.191 [2024-04-26 13:37:44.627093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.643062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.643105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.643121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.658238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.658292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.658307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.672424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.672462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.672477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.683976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.684012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.684026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.696914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.696955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.696970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.711537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.711580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.711594] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.727344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.727386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.727400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.742223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.742263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.742278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.753524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.753561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.753575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.769213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.769250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.769265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.784403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.784443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.784456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.797158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.797199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.797215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.813136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.813176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.813189] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.827568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.827607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.450 [2024-04-26 13:37:44.827621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.450 [2024-04-26 13:37:44.842458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.450 [2024-04-26 13:37:44.842497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.451 [2024-04-26 13:37:44.842511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.451 [2024-04-26 13:37:44.856833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.451 [2024-04-26 13:37:44.856872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.451 [2024-04-26 13:37:44.856887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.451 [2024-04-26 13:37:44.869432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.451 [2024-04-26 13:37:44.869472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.451 [2024-04-26 13:37:44.869487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.451 [2024-04-26 13:37:44.884395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.451 [2024-04-26 13:37:44.884457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.451 [2024-04-26 13:37:44.884473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.709 [2024-04-26 13:37:44.898178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.709 [2024-04-26 13:37:44.898216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.709 [2024-04-26 13:37:44.898231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.709 [2024-04-26 13:37:44.914759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.709 [2024-04-26 13:37:44.914809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.709 [2024-04-26 13:37:44.914825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.709 [2024-04-26 13:37:44.926914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.709 [2024-04-26 13:37:44.926965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.709 [2024-04-26 13:37:44.926980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.709 [2024-04-26 13:37:44.941005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.709 [2024-04-26 13:37:44.941042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.709 [2024-04-26 13:37:44.941056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.709 [2024-04-26 13:37:44.953149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.709 [2024-04-26 13:37:44.953185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:44.953200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:44.966491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:44.966552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:44.966568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:44.983047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:44.983094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:44.983110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:44.997879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:44.997920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:44.997934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.009712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.009751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14104 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.009765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.024933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.024975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.024989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.039843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.039887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.039902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.052980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.053023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.053038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.067579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.067631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.067646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.082066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.082111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.082126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.096616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.096659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.096674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.108663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.108707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:104 nsid:1 lba:34 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.108723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.121494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.121537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:25530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.121551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.136833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.136874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.136889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.710 [2024-04-26 13:37:45.152828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.710 [2024-04-26 13:37:45.152870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.710 [2024-04-26 13:37:45.152885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.167092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.167133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.167148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.181065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.181102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.181116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.192957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.192996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.193010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.207976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.208033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.208048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.222454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.222502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.222522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.235997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.236035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.236049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.249465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.249503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.249517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.265064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.265118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.265135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.278629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.278678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.278693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.292877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.292918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.292932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.306520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 
00:27:27.969 [2024-04-26 13:37:45.306567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.306581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.320243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.320298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.320313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.334344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.334394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.334409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.346377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.346413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.346430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.361126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.361162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.361176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.374618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.374656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.374670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.390964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.391017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.391031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.969 [2024-04-26 13:37:45.405005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x5e3600) 00:27:27.969 [2024-04-26 13:37:45.405041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.969 [2024-04-26 13:37:45.405055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.418537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.418575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.418590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.431462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.431500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.431514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.445641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.445678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.445692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.460002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.460039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.460053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.472376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.472415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.472429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.228 [2024-04-26 13:37:45.488050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.228 [2024-04-26 13:37:45.488100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.228 [2024-04-26 13:37:45.488114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.502265] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.502301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.502315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.515977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.516014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.516028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.531457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.531494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.531508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.545471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.545508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.545523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.559921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.559958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.559972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.571641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.571678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.571692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.586535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.586576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.586590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:28.229 [2024-04-26 13:37:45.600684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.600720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.600734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.612731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.612765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.612793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.628386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.628425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.628439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.641784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.641844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.641859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.656597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.656634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.656647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.229 [2024-04-26 13:37:45.668909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.229 [2024-04-26 13:37:45.668945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.229 [2024-04-26 13:37:45.668958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.682942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.683007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.683021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.697067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.697118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.697132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.711905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.711942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.711955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.725969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.726006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.726020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.739437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.739473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.739487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.754569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.754605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.754619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.771454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.487 [2024-04-26 13:37:45.771492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.487 [2024-04-26 13:37:45.771506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.487 [2024-04-26 13:37:45.784459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.784495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.784509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.800583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.800620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.800634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.813567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.813603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.830109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.830145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.830159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.846317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.846353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.846375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.859242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.859280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.859294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.874627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.874663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.874676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.886916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.886953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.886967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.900653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.900689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.900703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.914904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.914941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.914954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.488 [2024-04-26 13:37:45.929988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.488 [2024-04-26 13:37:45.930025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.488 [2024-04-26 13:37:45.930039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:45.945227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:45.945263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:45.945277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:45.957391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:45.957428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:45.957441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:45.972349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:45.972386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:45.972401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:45.985343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:45.985380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:28.747 [2024-04-26 13:37:45.985393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:45.999684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:45.999720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:45.999734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:46.013951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:46.013987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:46.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:46.028377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:46.028414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:46.028428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:46.046673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:46.046710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.747 [2024-04-26 13:37:46.046724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.747 [2024-04-26 13:37:46.066847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.747 [2024-04-26 13:37:46.066886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.066899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.080661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.080698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.080711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.095902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.095938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21445 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.095952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.108791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.108840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.108855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.123082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.123117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.123130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.137938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.137974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.137987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.151328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.151366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.151380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.165346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.165399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.177715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.177758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.177773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:28.748 [2024-04-26 13:37:46.191406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:28.748 [2024-04-26 13:37:46.191447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.748 [2024-04-26 13:37:46.191462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.207153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.207192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.207207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.221644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.221698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.235352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.235392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.235406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.250164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.250201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.250215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.264413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.264452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.264466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.277098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.277136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.277150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.291645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.291684] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.291699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.306479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.306524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.306538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.321488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.321530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.321545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.007 [2024-04-26 13:37:46.335241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.007 [2024-04-26 13:37:46.335278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.007 [2024-04-26 13:37:46.335293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.348959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.348995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.349009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.362716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.362763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.362788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.377454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.377490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.377504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.390773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.390823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.390837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.405055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.405100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.405115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.419136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.419173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.419187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.008 [2024-04-26 13:37:46.436919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.008 [2024-04-26 13:37:46.436954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.008 [2024-04-26 13:37:46.436968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.267 [2024-04-26 13:37:46.455686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.267 [2024-04-26 13:37:46.455723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.267 [2024-04-26 13:37:46.455737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.267 [2024-04-26 13:37:46.471707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.267 [2024-04-26 13:37:46.471747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.267 [2024-04-26 13:37:46.471760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.267 [2024-04-26 13:37:46.487208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.267 [2024-04-26 13:37:46.487245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.267 [2024-04-26 13:37:46.487258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.267 [2024-04-26 13:37:46.506006] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5e3600) 00:27:29.267 [2024-04-26 13:37:46.506043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.267 [2024-04-26 13:37:46.506057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.267 00:27:29.267 Latency(us) 00:27:29.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.267 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:29.267 nvme0n1 : 2.01 17781.99 69.46 0.00 0.00 7189.93 3485.32 22282.24 00:27:29.267 =================================================================================================================== 00:27:29.267 Total : 17781.99 69.46 0.00 0.00 7189.93 3485.32 22282.24 00:27:29.267 0 00:27:29.267 13:37:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:29.267 13:37:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:29.267 13:37:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:29.267 13:37:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:29.267 | .driver_specific 00:27:29.267 | .nvme_error 00:27:29.267 | .status_code 00:27:29.267 | .command_transient_transport_error' 00:27:29.527 13:37:46 -- host/digest.sh@71 -- # (( 139 > 0 )) 00:27:29.527 13:37:46 -- host/digest.sh@73 -- # killprocess 86049 00:27:29.527 13:37:46 -- common/autotest_common.sh@936 -- # '[' -z 86049 ']' 00:27:29.527 13:37:46 -- common/autotest_common.sh@940 -- # kill -0 86049 00:27:29.527 13:37:46 -- common/autotest_common.sh@941 -- # uname 00:27:29.527 13:37:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:29.527 13:37:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86049 00:27:29.527 13:37:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:29.527 13:37:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:29.527 killing process with pid 86049 00:27:29.527 13:37:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86049' 00:27:29.527 13:37:46 -- common/autotest_common.sh@955 -- # kill 86049 00:27:29.527 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.527 00:27:29.527 Latency(us) 00:27:29.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.527 =================================================================================================================== 00:27:29.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.527 13:37:46 -- common/autotest_common.sh@960 -- # wait 86049 00:27:29.787 13:37:47 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:29.787 13:37:47 -- host/digest.sh@54 -- # local rw bs qd 00:27:29.787 13:37:47 -- host/digest.sh@56 -- # rw=randread 00:27:29.787 13:37:47 -- host/digest.sh@56 -- # bs=131072 00:27:29.787 13:37:47 -- host/digest.sh@56 -- # qd=16 00:27:29.787 13:37:47 -- host/digest.sh@58 -- # bperfpid=86141 00:27:29.787 13:37:47 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:29.787 13:37:47 -- host/digest.sh@60 -- # waitforlisten 86141 /var/tmp/bperf.sock 00:27:29.787 13:37:47 -- common/autotest_common.sh@817 -- # '[' -z 86141 ']' 
00:27:29.787 13:37:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.787 13:37:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:29.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.787 13:37:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.787 13:37:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:29.787 13:37:47 -- common/autotest_common.sh@10 -- # set +x 00:27:29.787 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.787 Zero copy mechanism will not be used. 00:27:29.787 [2024-04-26 13:37:47.233746] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:29.787 [2024-04-26 13:37:47.233871] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86141 ] 00:27:30.058 [2024-04-26 13:37:47.372720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.058 [2024-04-26 13:37:47.491054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.006 13:37:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:31.006 13:37:48 -- common/autotest_common.sh@850 -- # return 0 00:27:31.006 13:37:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:31.006 13:37:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:31.265 13:37:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:31.265 13:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.265 13:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.265 13:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.265 13:37:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.265 13:37:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:31.524 nvme0n1 00:27:31.524 13:37:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:31.524 13:37:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.524 13:37:48 -- common/autotest_common.sh@10 -- # set +x 00:27:31.524 13:37:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.524 13:37:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:31.524 13:37:48 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.782 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:31.782 Zero copy mechanism will not be used. 00:27:31.782 Running I/O for 2 seconds... 
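The 131072/qd16 run that follows is driven entirely over the RPC sockets traced above. As a condensed, illustrative sketch assembled from that trace (not the literal host/digest.sh source; accel_error_inject_error is written without -s because the trace does not show which socket rpc_cmd targets):

# Start the host-side bdevperf in wait mode: 131072-byte random reads, queue depth 16, 2 seconds.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &

# Keep per-status NVMe error counters and retry transport errors indefinitely.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the controller with TCP data digest (--ddgst) enabled while CRC32C injection is off,
# then arm the accel layer to corrupt the next 32 CRC32C computations.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload; each corrupted digest surfaces below as a data digest error followed by a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which the host retries.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Afterwards the accumulated transient-error count is read back, as the get_transient_errcount
# check did for the previous run ((( 139 > 0 ))):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'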
00:27:31.782 [2024-04-26 13:37:49.091797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.782 [2024-04-26 13:37:49.091874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.782 [2024-04-26 13:37:49.091892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.782 [2024-04-26 13:37:49.096371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.782 [2024-04-26 13:37:49.096420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.782 [2024-04-26 13:37:49.096436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.782 [2024-04-26 13:37:49.101351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.101399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.101415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.106497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.106537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.106552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.109993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.110030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.110044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.114476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.114522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.114537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.119290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.119329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.119343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.122635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.122673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.122687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.126973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.127010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.127025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.131573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.131611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.131626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.136360] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.136398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.136412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.139868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.139905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.139918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.144464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.144501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.144514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.149433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.149471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.149485] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.153533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.153571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.153585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.157157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.157193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.157207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.161516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.161553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.161567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.165807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.165842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.165856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.169028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.169065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.169079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.173624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.173662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.173676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.179023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.179061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.179075] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.182345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.182389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.182403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.187193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.187230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.187243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.190255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.190291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.190304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.194338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.194384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.194399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.198934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.198971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.198984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.202214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.202250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.783 [2024-04-26 13:37:49.202263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.205699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.205736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:31.783 [2024-04-26 13:37:49.205750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.783 [2024-04-26 13:37:49.209691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.783 [2024-04-26 13:37:49.209728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.209742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.784 [2024-04-26 13:37:49.213701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.784 [2024-04-26 13:37:49.213737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.213751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:31.784 [2024-04-26 13:37:49.217417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.784 [2024-04-26 13:37:49.217455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.217469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:31.784 [2024-04-26 13:37:49.221152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.784 [2024-04-26 13:37:49.221188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.221202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:31.784 [2024-04-26 13:37:49.226746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.784 [2024-04-26 13:37:49.226805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.226820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:31.784 [2024-04-26 13:37:49.230714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:31.784 [2024-04-26 13:37:49.230752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.784 [2024-04-26 13:37:49.230765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.235271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.235323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.240092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.240130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.240144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.242851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.242886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.242900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.247509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.247545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.247559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.252274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.252311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.252325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.257142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.257178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.257193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.260390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.260424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.260438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.264915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.264952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.264966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.269005] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.269041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.269054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.272950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.272986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.272999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.276975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.277012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.277025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.281433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.281469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.281482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.285400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.285439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.285452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.290137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.290174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.290187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.293261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 
13:37:49.293297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.293311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.297428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.297465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.297479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.302545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.302581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.302595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.307410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.307446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.307459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.310312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.310348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.310371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.315325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.315362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.315375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.319872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.319908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.319921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.323258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.323293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.323306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.327750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.327799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.327814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.332465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.332501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.332514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.335572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.335609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.335622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.044 [2024-04-26 13:37:49.340461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.044 [2024-04-26 13:37:49.340497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.044 [2024-04-26 13:37:49.340511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.344007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.344043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.344057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.348645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.348683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.348696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.352054] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.352101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.356306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.356342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.356356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.361018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.361053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.361066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.364454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.364491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.364505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.368988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.369024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.369037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.373090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.373126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.373140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.375885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.375919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.375933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:27:32.045 [2024-04-26 13:37:49.380284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.380321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.380335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.384598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.384635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.384649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.387937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.387972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.387985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.392315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.392351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.392366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.396957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.396992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.397006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.401266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.401301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.401315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.406523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.406570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.410062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.410096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.410109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.414011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.414047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.414060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.418242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.418278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.418291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.421793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.421827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.421841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.425887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.425922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.425936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.429722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.429759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.429772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.433633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.433669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.433682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.438267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.438304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.438318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.441588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.441625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.441638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.045 [2024-04-26 13:37:49.445923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.045 [2024-04-26 13:37:49.445960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.045 [2024-04-26 13:37:49.445973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.450436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.450471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.450485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.453586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.453622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.453635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.458658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.458694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.458708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.462264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.462299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:32.046 [2024-04-26 13:37:49.462313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.466466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.466502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.466516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.470610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.470646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.470659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.475451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.475489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.475502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.480608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.480644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.480658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.484379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.484413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.484427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.046 [2024-04-26 13:37:49.487645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.046 [2024-04-26 13:37:49.487680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.046 [2024-04-26 13:37:49.487694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.492868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.492904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18496 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.492918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.497737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.497774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.497801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.500795] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.500828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.500841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.505156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.505192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.505205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.510345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.510393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.510407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.515189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.515225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.515238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.518241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.518275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.518288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.522436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.306 [2024-04-26 13:37:49.522472] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.306 [2024-04-26 13:37:49.522486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.306 [2024-04-26 13:37:49.526389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.526425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.526438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.530847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.530884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.530898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.534641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.534679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.534693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.538741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.538791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.538808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.543084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.543121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.543135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.546659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.546694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.546708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.550980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.551016] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.551031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.554443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.554478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.554491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.558551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.558588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.558601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.562885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.562921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.562934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.567070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.567106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.567119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.570919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.570955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.570968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.575369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.575406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.575419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.579302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.579338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.579352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.583413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.583450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.583464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.588013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.588064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.591827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.591863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.591877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.595940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.595977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.595990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.599937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.599972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.599986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.604652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.604688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.604702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.608687] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.608723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.608737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.612410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.612453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.612466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.616558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.616594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.616607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.620582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.620618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.620632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.624909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.624945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.624958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.628102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.628137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.628150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.307 [2024-04-26 13:37:49.632616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.307 [2024-04-26 13:37:49.632653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.307 [2024-04-26 13:37:49.632666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:32.307 [2024-04-26 13:37:49.637297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.637333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.641511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.641549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.641563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.645243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.645279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.645292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.649256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.649291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.649306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.653505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.653541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.653554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.657907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.657943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.657957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.661888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.661923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.661936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.665613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.665648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.665662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.669483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.669520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.669533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.673725] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.673762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.673791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.677447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.677483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.677496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.682227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.682266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.682280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.686006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.686042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.686055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.690347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.690393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.690407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.693671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.693707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.693720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.698512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.698549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.698562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.701589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.701625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.701639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.705838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.705873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.705886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.710148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.710184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.710198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.713477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.713513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.713537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.718027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.718065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.718079] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.722062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.722098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.722112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.726019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.726055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.726068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.729881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.729917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.729930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.734332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.734377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.734390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.738600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.738635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.738648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.742571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.308 [2024-04-26 13:37:49.742606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.308 [2024-04-26 13:37:49.742620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.308 [2024-04-26 13:37:49.747359] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.309 [2024-04-26 13:37:49.747395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:32.309 [2024-04-26 13:37:49.747408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.309 [2024-04-26 13:37:49.751364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.309 [2024-04-26 13:37:49.751400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.309 [2024-04-26 13:37:49.751414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.569 [2024-04-26 13:37:49.755322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.569 [2024-04-26 13:37:49.755357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.569 [2024-04-26 13:37:49.755371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.569 [2024-04-26 13:37:49.758461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.569 [2024-04-26 13:37:49.758497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.569 [2024-04-26 13:37:49.758510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.569 [2024-04-26 13:37:49.762471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.569 [2024-04-26 13:37:49.762506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.762519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.766644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.766679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.766693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.770139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.770175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.770188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.774730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.774766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.774792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.778503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.778539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.778552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.782663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.782698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.782712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.786420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.786456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.786470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.790920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.790955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.790969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.794309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.794345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.794367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.797527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.797564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.797579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.802024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.802061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.802074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.806721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.806757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.806771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.811034] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.811071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.811085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.813946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.813982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.813995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.819212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.819258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.819272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.823278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.823316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.823331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.827185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.827228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.827242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.831298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 
[2024-04-26 13:37:49.831334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.831348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.834922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.834957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.834971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.839524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.839574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.843988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.844024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.844038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.848824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.848862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.848877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.852180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.852216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.570 [2024-04-26 13:37:49.852230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.570 [2024-04-26 13:37:49.857589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.570 [2024-04-26 13:37:49.857626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.857640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.862871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.862907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.862920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.866280] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.866315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.866329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.870494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.870530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.870544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.875472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.875509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.875523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.880325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.880361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.880375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.883543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.883579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.883592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.888458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.888502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.888516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.892893] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.892947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.898237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.898283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.898298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.901895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.901933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.901948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.906517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.906554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.906567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.911001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.911037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.911051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.914939] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.914975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.914989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.919042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.919078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.919092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:32.571 [2024-04-26 13:37:49.923297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.923333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.923346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.927674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.927710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.927723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.930944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.930980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.930993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.936085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.936122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.936135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.940545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.940582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.940595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.943269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.943303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.571 [2024-04-26 13:37:49.943317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.571 [2024-04-26 13:37:49.948162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.571 [2024-04-26 13:37:49.948198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.948212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.952517] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.952550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.952564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.956401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.956437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.956451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.960564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.960600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.960615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.964543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.964579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.964592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.968419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.968465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.968486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.973492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.973531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.973545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.977263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.977300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.977313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.980885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.980922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.980935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.985219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.985256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.985270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.989454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.989491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.989509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.992995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.993033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.993047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:49.996822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:49.996857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:49.996871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:50.001084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:50.001121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:50.001134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:50.005238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:50.005275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:32.572 [2024-04-26 13:37:50.005289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:50.008508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:50.008544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:50.008558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.572 [2024-04-26 13:37:50.012818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.572 [2024-04-26 13:37:50.012854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.572 [2024-04-26 13:37:50.012867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.016977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.017013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.017027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.020478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.020515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.024487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.024533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.024547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.028239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.028280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.028295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.031426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.031464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.031478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.035970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.036015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.036030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.040665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.040701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.040715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.046079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.046121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.046135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.048999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.049036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.049049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.053698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.053735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.053748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.059079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.059116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.059130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.063129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.063164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.063177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.066542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.066578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.066591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.071160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.071197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.071210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.075035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.075070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.075085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.078419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.078454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.082775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.082825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.082840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.088074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.088121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.088136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.093028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 
[2024-04-26 13:37:50.093074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.093088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.096104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.096140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.096153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.100479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.100520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.100534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.105608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.105649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.105663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.833 [2024-04-26 13:37:50.109630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.833 [2024-04-26 13:37:50.109665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.833 [2024-04-26 13:37:50.109679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.113268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.113312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.113326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.118012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.118049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.118062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.122644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.122680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.122694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.127881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.127917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.127931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.131228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.131263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.131277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.135202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.135238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.135251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.140354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.140392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.140405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.144855] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.144891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.144904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.148219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.148255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.148268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.152327] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.152364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.152377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.156259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.156296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.156309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.159768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.159814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.159828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.164405] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.164442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.164455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.167956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.167991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.168005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.172371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.172407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.172421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.176875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.176910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.176923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:32.834 [2024-04-26 13:37:50.181933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.181969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.181982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.185223] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.185258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.185271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.189377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.189413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.189426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.194142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.194179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.194193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.197385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.197424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.197438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.201889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.201930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.201944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.206166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.206205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.206220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.209631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.209673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.209687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.214037] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.214079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.214094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.217976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.218017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.834 [2024-04-26 13:37:50.218031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.834 [2024-04-26 13:37:50.222714] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.834 [2024-04-26 13:37:50.222756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.222772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.226420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.226458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.226472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.230404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.230443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.230457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.234981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.235018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.235032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.238701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.238739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.238753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.243525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.243562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.243576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.248284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.248323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.248337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.251110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.251144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.251157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.255905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.255942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.255955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.261078] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.261115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.261129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.264367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.264403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 
[2024-04-26 13:37:50.264416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.268531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.268567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.268581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.273819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.273855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.273868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.835 [2024-04-26 13:37:50.278665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:32.835 [2024-04-26 13:37:50.278701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.835 [2024-04-26 13:37:50.278714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.281326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.281360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.281373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.286353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.286404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.286418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.290207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.290244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.290264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.293862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.293898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.293911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.298382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.298428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.298442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.302498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.302533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.302546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.306553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.306589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.306603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.310060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.310095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.310109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.313906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.313973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.313987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.318569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.318606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.318619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.322852] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.322886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.322899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.325670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.325704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.325718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.330497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.330533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.330547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.333920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.333955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.095 [2024-04-26 13:37:50.333968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.095 [2024-04-26 13:37:50.338199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.095 [2024-04-26 13:37:50.338235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.338250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.342738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.342773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.342799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.347275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.347311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.347324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.351677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.351711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.351725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.354790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.354830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.354843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.359258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.359295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.359308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.362757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.362802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.362817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.367256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.367293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.367306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.370904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.370940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.370954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.374570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.374607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.374620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.379000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 
00:27:33.096 [2024-04-26 13:37:50.379036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.379050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.383060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.383096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.383109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.386681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.386717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.386731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.390701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.390736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.390749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.394822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.394857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.394871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.398555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.398590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.398603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.402794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.402827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.402841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.407114] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.407150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.411347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.411384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.411398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.416073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.416110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.416124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.420080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.420116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.420130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.423639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.423675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.423688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.427922] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.427958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.427971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.432073] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.432109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.432122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.435372] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.435406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.435420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.439860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.439896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.439909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.443321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.443356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.096 [2024-04-26 13:37:50.443369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.096 [2024-04-26 13:37:50.446983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.096 [2024-04-26 13:37:50.447019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.447033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.451652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.451693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.451707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.456597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.456638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.456653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.460157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.460192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.460206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:27:33.097 [2024-04-26 13:37:50.464620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.464656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.464670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.469945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.469981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.469995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.474793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.474828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.474841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.477936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.477970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.477983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.482491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.482535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.482548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.487001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.487037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.487051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.490275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.490310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.490324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.494198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.494234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.494247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.497997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.498032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.498046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.501874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.501912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.501926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.506305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.506341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.506354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.509860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.509895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.509908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.513553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.513590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.513603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.518251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.518290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.518303] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.522757] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.522803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.522817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.526224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.526261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.526274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.530584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.530621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.530635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.534464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.534500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.534514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.097 [2024-04-26 13:37:50.538748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.097 [2024-04-26 13:37:50.538797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.097 [2024-04-26 13:37:50.538812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.358 [2024-04-26 13:37:50.542304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.358 [2024-04-26 13:37:50.542339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.358 [2024-04-26 13:37:50.542353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.358 [2024-04-26 13:37:50.545831] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.358 [2024-04-26 13:37:50.545865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.545879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.550246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.550283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.550296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.554693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.554736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.554751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.558534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.558590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.563112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.563149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.563164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.566755] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.566801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.566815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.570862] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.570904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.570917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.575457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.575496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.359 [2024-04-26 13:37:50.575510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.579172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.579208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.579223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.583443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.583480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.583494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.587372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.587408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.587422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.591718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.591754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.591767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.596081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.596117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.596131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.599894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.599930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.599943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.603647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.603684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.603698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.607609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.607646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.607659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.612200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.612236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.612249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.616040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.616078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.616091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.619790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.619824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.619837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.624048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.624084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.624098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.628323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.628358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.628372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.631758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.631805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.631819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.636880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.636919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.636934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.641821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.641861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.641876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.644678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.644713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.644726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.649381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.649418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.649432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.654085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.654121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.654135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.359 [2024-04-26 13:37:50.658196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.359 [2024-04-26 13:37:50.658231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.359 [2024-04-26 13:37:50.658245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.662551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 
[2024-04-26 13:37:50.662587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.662600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.665830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.665865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.665879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.670228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.670263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.670277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.674532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.674567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.674580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.678070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.678106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.678119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.682250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.682286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.682299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.686346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.686393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.686406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.690083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.690118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.690131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.694560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.694596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.694610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.698513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.698551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.698564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.701973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.702008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.702021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.706886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.706922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.706936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.711931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.711967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.711981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.714685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.714720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.714733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.719728] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.719764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.719793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.724281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.724324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.724338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.727770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.727822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.727837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.732605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.732644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.732657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.736691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.736726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.736739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.740057] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.740092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.740105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.744346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.744382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.744396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.749061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.749097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.749110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.752463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.752499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.752513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.757000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.757036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.757049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.360 [2024-04-26 13:37:50.761154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.360 [2024-04-26 13:37:50.761190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.360 [2024-04-26 13:37:50.761203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.765297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.765337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.765351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.769298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.769337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.769350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.773248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.773286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.773300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.776723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.776758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.776772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.781379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.781415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.781429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.785016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.785052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.785066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.789224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.789260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.789274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.793203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.793238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.793251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.796958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.796994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.797007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.801097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.801132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.801146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.361 [2024-04-26 13:37:50.805588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.361 [2024-04-26 13:37:50.805624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.361 [2024-04-26 13:37:50.805638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.809312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.809349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.809363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.813627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.813663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.813677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.817858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.817896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.817909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.821319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.821356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.821370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.825442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.825479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.825493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.829901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.829938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.622 [2024-04-26 13:37:50.829961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.834141] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.834177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.834190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.838334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.838384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.838403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.842887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.842923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.842937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.846949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.846985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.846999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.851306] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.851342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.851356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.855108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.855148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.855165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.859595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.859631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7296 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.859645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.863982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.864019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.864032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.868021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.868057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.868070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.871711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.871747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.871761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.876339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.876376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.876390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.879268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.879305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.879318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.883362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.883399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.883412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.887766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.887816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.887829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.891584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.891622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.891636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.895488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.895526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.895540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.900351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.900388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.622 [2024-04-26 13:37:50.900402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.622 [2024-04-26 13:37:50.903462] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.622 [2024-04-26 13:37:50.903501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.903514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.907509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.907548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.907562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.911944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.911983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.911996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.915676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 
00:27:33.623 [2024-04-26 13:37:50.915711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.915725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.919296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.919332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.919345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.923613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.923650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.923663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.928062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.928098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.928111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.931667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.931703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.931716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.936677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.936713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.936727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.940212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.940247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.940260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.944801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.944836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.944850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.949507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.949543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.949557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.954177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.954210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.954223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.959111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.959150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.959164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.962086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.962122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.962136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.966184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.966220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.966234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.970569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.970607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.970620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.974474] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.974508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.974521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.978640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.978677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.978690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.982924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.982962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.982976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.987083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.987120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.987134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.991961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.991998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.992012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:50.994762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:50.994809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:50.994837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:51.000017] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:51.000054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:51.000067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:33.623 [2024-04-26 13:37:51.004056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:51.004092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:51.004106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:51.007713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:51.007749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:51.007763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.623 [2024-04-26 13:37:51.011966] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.623 [2024-04-26 13:37:51.012003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.623 [2024-04-26 13:37:51.012016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.016386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.016423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.016436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.019818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.019853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.019867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.024028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.024064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.024078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.027622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.027657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.027670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.032050] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.032087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.032101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.035613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.035650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.035663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.040704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.040742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.040755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.045286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.045338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.045353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.049754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.049806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.049821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.053112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.053150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.053163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.057101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.057137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.057151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.062446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.062488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.062502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.624 [2024-04-26 13:37:51.067900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.624 [2024-04-26 13:37:51.067944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.624 [2024-04-26 13:37:51.067958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.882 [2024-04-26 13:37:51.072542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.882 [2024-04-26 13:37:51.072579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.882 [2024-04-26 13:37:51.072593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:33.882 [2024-04-26 13:37:51.075237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.882 [2024-04-26 13:37:51.075271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.882 [2024-04-26 13:37:51.075285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:33.882 [2024-04-26 13:37:51.080001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.882 [2024-04-26 13:37:51.080038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.882 [2024-04-26 13:37:51.080052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:33.882 [2024-04-26 13:37:51.084247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c3ac90) 00:27:33.882 [2024-04-26 13:37:51.084283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.882 [2024-04-26 13:37:51.084297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:33.882 00:27:33.882 Latency(us) 00:27:33.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.882 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:33.882 nvme0n1 : 2.00 7472.04 934.00 0.00 0.00 2137.51 618.12 5749.29 00:27:33.882 
=================================================================================================================== 00:27:33.882 Total : 7472.04 934.00 0.00 0.00 2137.51 618.12 5749.29 00:27:33.882 0 00:27:33.882 13:37:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:33.882 13:37:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:33.882 13:37:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:33.882 | .driver_specific 00:27:33.882 | .nvme_error 00:27:33.882 | .status_code 00:27:33.882 | .command_transient_transport_error' 00:27:33.882 13:37:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:34.140 13:37:51 -- host/digest.sh@71 -- # (( 482 > 0 )) 00:27:34.140 13:37:51 -- host/digest.sh@73 -- # killprocess 86141 00:27:34.140 13:37:51 -- common/autotest_common.sh@936 -- # '[' -z 86141 ']' 00:27:34.141 13:37:51 -- common/autotest_common.sh@940 -- # kill -0 86141 00:27:34.141 13:37:51 -- common/autotest_common.sh@941 -- # uname 00:27:34.141 13:37:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:34.141 13:37:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86141 00:27:34.141 13:37:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:34.141 13:37:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:34.141 13:37:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86141' 00:27:34.141 killing process with pid 86141 00:27:34.141 13:37:51 -- common/autotest_common.sh@955 -- # kill 86141 00:27:34.141 Received shutdown signal, test time was about 2.000000 seconds 00:27:34.141 00:27:34.141 Latency(us) 00:27:34.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.141 =================================================================================================================== 00:27:34.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.141 13:37:51 -- common/autotest_common.sh@960 -- # wait 86141 00:27:34.399 13:37:51 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:34.399 13:37:51 -- host/digest.sh@54 -- # local rw bs qd 00:27:34.399 13:37:51 -- host/digest.sh@56 -- # rw=randwrite 00:27:34.399 13:37:51 -- host/digest.sh@56 -- # bs=4096 00:27:34.399 13:37:51 -- host/digest.sh@56 -- # qd=128 00:27:34.399 13:37:51 -- host/digest.sh@58 -- # bperfpid=86231 00:27:34.399 13:37:51 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:34.399 13:37:51 -- host/digest.sh@60 -- # waitforlisten 86231 /var/tmp/bperf.sock 00:27:34.399 13:37:51 -- common/autotest_common.sh@817 -- # '[' -z 86231 ']' 00:27:34.399 13:37:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:34.399 13:37:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:34.399 13:37:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:34.399 13:37:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.399 13:37:51 -- common/autotest_common.sh@10 -- # set +x 00:27:34.399 [2024-04-26 13:37:51.757926] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
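A minimal sketch of the transient-error check traced above for the randread pass (reconstructed from the xtrace lines; not part of the captured output). The get_transient_errcount step reduces to roughly the following bash, using the bperf RPC socket and bdev name from this run:

  # Query per-bdev NVMe error counters from the bdevperf app over its RPC socket
  # and pull out the COMMAND TRANSIENT TRANSPORT ERROR count.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run above reported 482 such errors, so this check passed and killprocess 86141 followed.
  (( count > 0 ))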
00:27:34.399 [2024-04-26 13:37:51.758040] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86231 ] 00:27:34.658 [2024-04-26 13:37:51.897506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.658 [2024-04-26 13:37:52.014655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.593 13:37:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.593 13:37:52 -- common/autotest_common.sh@850 -- # return 0 00:27:35.593 13:37:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.593 13:37:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:35.852 13:37:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:35.852 13:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.852 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:27:35.852 13:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.852 13:37:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.852 13:37:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:36.110 nvme0n1 00:27:36.110 13:37:53 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:36.110 13:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.110 13:37:53 -- common/autotest_common.sh@10 -- # set +x 00:27:36.110 13:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.110 13:37:53 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:36.110 13:37:53 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.110 Running I/O for 2 seconds... 
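A rough sketch of the digest-error setup for this randwrite pass, condensed from the xtrace output around this point (reconstruction, not captured output). The wrapper names are the script's own: bperf_rpc expands to scripts/rpc.py -s /var/tmp/bperf.sock as shown in the trace, while rpc_cmd is the autotest RPC wrapper whose target socket is not shown in this excerpt.

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # enable NVMe error statistics (options as traced)
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # clear any previous crc32c injection
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # attach with data digest enabled
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256              # inject crc32c corruption (-i 256 as traced)
  bperf_py perform_tests                                                     # drive the 2-second randwrite workload

The Data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the expected result of this injection, and the script later asserts that their count is non-zero.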
00:27:36.110 [2024-04-26 13:37:53.552688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ee5c8 00:27:36.110 [2024-04-26 13:37:53.553620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.110 [2024-04-26 13:37:53.553657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.564368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e2c28 00:27:36.369 [2024-04-26 13:37:53.565104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.565139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.578594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ecc78 00:27:36.369 [2024-04-26 13:37:53.580181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.580212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.589635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1868 00:27:36.369 [2024-04-26 13:37:53.590829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.590860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.601231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e73e0 00:27:36.369 [2024-04-26 13:37:53.602529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.602562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.615535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e38d0 00:27:36.369 [2024-04-26 13:37:53.617507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.617538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.624030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f7970 00:27:36.369 [2024-04-26 13:37:53.625029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.625071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 
sqhd:001d p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.636340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ec840 00:27:36.369 [2024-04-26 13:37:53.637426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.637489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.651248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ebb98 00:27:36.369 [2024-04-26 13:37:53.653126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.653168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.659801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5a90 00:27:36.369 [2024-04-26 13:37:53.660650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.660685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.673311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6cc8 00:27:36.369 [2024-04-26 13:37:53.674522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.674560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.684627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ee5c8 00:27:36.369 [2024-04-26 13:37:53.685697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.685734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.698646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fe720 00:27:36.369 [2024-04-26 13:37:53.700373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.700411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.709249] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e0ea0 00:27:36.369 [2024-04-26 13:37:53.711041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.711080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.722713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f9b30 00:27:36.369 [2024-04-26 13:37:53.724279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.724316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.734444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f0ff8 00:27:36.369 [2024-04-26 13:37:53.735889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.369 [2024-04-26 13:37:53.735949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:36.369 [2024-04-26 13:37:53.745310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e27f0 00:27:36.370 [2024-04-26 13:37:53.746286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.746326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.757017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f0350 00:27:36.370 [2024-04-26 13:37:53.757893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.757929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.772229] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fd640 00:27:36.370 [2024-04-26 13:37:53.774262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.774300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.780721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f8618 00:27:36.370 [2024-04-26 13:37:53.781756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.781799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.794451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1430 00:27:36.370 [2024-04-26 13:37:53.795916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.795951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.803809] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f81e0 00:27:36.370 [2024-04-26 13:37:53.804504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.370 [2024-04-26 13:37:53.804542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:36.370 [2024-04-26 13:37:53.816495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e38d0 00:27:36.628 [2024-04-26 13:37:53.817267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.628 [2024-04-26 13:37:53.817304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:36.628 [2024-04-26 13:37:53.828467] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f5378 00:27:36.628 [2024-04-26 13:37:53.829374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.628 [2024-04-26 13:37:53.829410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:36.628 [2024-04-26 13:37:53.840581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e4de8 00:27:36.628 [2024-04-26 13:37:53.841489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.628 [2024-04-26 13:37:53.841532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:36.628 [2024-04-26 13:37:53.855323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e23b8 00:27:36.628 [2024-04-26 13:37:53.857115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.628 [2024-04-26 13:37:53.857158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:36.628 [2024-04-26 13:37:53.863851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ddc00 00:27:36.628 [2024-04-26 13:37:53.864588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.628 [2024-04-26 13:37:53.864624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:36.628 [2024-04-26 13:37:53.877291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f46d0 00:27:36.628 [2024-04-26 13:37:53.878406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.878451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.888504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e6738 00:27:36.629 [2024-04-26 13:37:53.889461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.889498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.899888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2d80 00:27:36.629 [2024-04-26 13:37:53.900643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.900679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.914042] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fc128 00:27:36.629 [2024-04-26 13:37:53.915050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.915094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.925573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5ec8 00:27:36.629 [2024-04-26 13:37:53.926475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.926512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.937315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5220 00:27:36.629 [2024-04-26 13:37:53.938376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.938418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.949019] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4298 00:27:36.629 [2024-04-26 13:37:53.950146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.950185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.960922] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190df988 00:27:36.629 [2024-04-26 13:37:53.961549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 
13:37:53.961588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.974696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e99d8 00:27:36.629 [2024-04-26 13:37:53.976192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.976229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.986074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f7da8 00:27:36.629 [2024-04-26 13:37:53.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.987467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:53.997932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f9f68 00:27:36.629 [2024-04-26 13:37:53.998920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:53.998956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.009360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6020 00:27:36.629 [2024-04-26 13:37:54.010232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:54.010268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.020828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2948 00:27:36.629 [2024-04-26 13:37:54.021471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:54.021510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.034554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ee5c8 00:27:36.629 [2024-04-26 13:37:54.036062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:54.036098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.045231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1ca0 00:27:36.629 [2024-04-26 13:37:54.047120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:36.629 [2024-04-26 13:37:54.047163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.058215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e7818 00:27:36.629 [2024-04-26 13:37:54.059310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:54.059355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:36.629 [2024-04-26 13:37:54.069592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fe2e8 00:27:36.629 [2024-04-26 13:37:54.070455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.629 [2024-04-26 13:37:54.070487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.082124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e8088 00:27:36.889 [2024-04-26 13:37:54.083141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.083178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.094003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e23b8 00:27:36.889 [2024-04-26 13:37:54.095351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.095385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.105338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2948 00:27:36.889 [2024-04-26 13:37:54.106523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.106558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.116694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6cc8 00:27:36.889 [2024-04-26 13:37:54.117704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.117738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.127993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ef270 00:27:36.889 [2024-04-26 13:37:54.128838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16711 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.128872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.142946] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f3e60 00:27:36.889 [2024-04-26 13:37:54.144823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.144860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.154276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e99d8 00:27:36.889 [2024-04-26 13:37:54.156009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.156048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.164519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e73e0 00:27:36.889 [2024-04-26 13:37:54.166356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.166402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.177353] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5220 00:27:36.889 [2024-04-26 13:37:54.178440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.178477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.188930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190de038 00:27:36.889 [2024-04-26 13:37:54.190174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.190210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.202520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e0a68 00:27:36.889 [2024-04-26 13:37:54.204386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.204421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.214216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190dfdc0 00:27:36.889 [2024-04-26 13:37:54.216113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:11087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.216153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.222816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fb480 00:27:36.889 [2024-04-26 13:37:54.223691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.223728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.237144] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ebb98 00:27:36.889 [2024-04-26 13:37:54.238543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.248513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e6738 00:27:36.889 [2024-04-26 13:37:54.249717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.249752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.259915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e4578 00:27:36.889 [2024-04-26 13:37:54.260982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.261021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.270885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2d80 00:27:36.889 [2024-04-26 13:37:54.271766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.271811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.283609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eff18 00:27:36.889 [2024-04-26 13:37:54.284648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.284684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.297230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ec408 00:27:36.889 [2024-04-26 13:37:54.298999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.889 [2024-04-26 13:37:54.299042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:36.889 [2024-04-26 13:37:54.308424] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eb760 00:27:36.889 [2024-04-26 13:37:54.310079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.890 [2024-04-26 13:37:54.310139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.890 [2024-04-26 13:37:54.320176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eaab8 00:27:36.890 [2024-04-26 13:37:54.321551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.890 [2024-04-26 13:37:54.321592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:36.890 [2024-04-26 13:37:54.331259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f5be8 00:27:36.890 [2024-04-26 13:37:54.332227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:36.890 [2024-04-26 13:37:54.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.344307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190de038 00:27:37.151 [2024-04-26 13:37:54.345203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.345241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.355757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f20d8 00:27:37.151 [2024-04-26 13:37:54.356523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.356561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.367163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190df550 00:27:37.151 [2024-04-26 13:37:54.367713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.367750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.380744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190de8a8 00:27:37.151 [2024-04-26 13:37:54.382135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.382171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.392112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6cc8 00:27:37.151 [2024-04-26 13:37:54.393360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.393396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.403425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f35f0 00:27:37.151 [2024-04-26 13:37:54.404496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.404531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.415208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fd208 00:27:37.151 [2024-04-26 13:37:54.416420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.416455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.429444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e23b8 00:27:37.151 [2024-04-26 13:37:54.431459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.431502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.438081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4b08 00:27:37.151 [2024-04-26 13:37:54.439030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.439067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.450199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f8e88 00:27:37.151 [2024-04-26 13:37:54.451127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.451162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.463666] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e0a68 00:27:37.151 [2024-04-26 
13:37:54.465119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.465154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.474797] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e3060 00:27:37.151 [2024-04-26 13:37:54.475757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.475806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.486623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e1b48 00:27:37.151 [2024-04-26 13:37:54.487879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.487921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.501299] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e4140 00:27:37.151 [2024-04-26 13:37:54.503229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.503272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.510558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fc998 00:27:37.151 [2024-04-26 13:37:54.511390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.511429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.524847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eb328 00:27:37.151 [2024-04-26 13:37:54.526383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.526420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.536996] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ebfd0 00:27:37.151 [2024-04-26 13:37:54.538518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.538554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.548391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e8d30 
00:27:37.151 [2024-04-26 13:37:54.549765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.549808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.559760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f0bc0 00:27:37.151 [2024-04-26 13:37:54.560961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.560997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.573186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e7818 00:27:37.151 [2024-04-26 13:37:54.574895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.574934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.581582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f3a28 00:27:37.151 [2024-04-26 13:37:54.582291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.151 [2024-04-26 13:37:54.582326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:37.151 [2024-04-26 13:37:54.595749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fef90 00:27:37.418 [2024-04-26 13:37:54.597202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.597241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.608318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2948 00:27:37.418 [2024-04-26 13:37:54.609819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.609867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.622047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f7da8 00:27:37.418 [2024-04-26 13:37:54.624059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.624102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.630660] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with 
pdu=0x2000190f1868 00:27:37.418 [2024-04-26 13:37:54.631610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.631646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.645049] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6cc8 00:27:37.418 [2024-04-26 13:37:54.646327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.646381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.655963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5658 00:27:37.418 [2024-04-26 13:37:54.657118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.657159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.668201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6020 00:27:37.418 [2024-04-26 13:37:54.668973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.669010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.679671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6cc8 00:27:37.418 [2024-04-26 13:37:54.680341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.680380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.693397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190df118 00:27:37.418 [2024-04-26 13:37:54.694867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.694904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.704769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2948 00:27:37.418 [2024-04-26 13:37:54.706052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.706088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.716657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x200d9b0) with pdu=0x2000190e84c0 00:27:37.418 [2024-04-26 13:37:54.718092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.718128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.729107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e2c28 00:27:37.418 [2024-04-26 13:37:54.730693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.730729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.740143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fa7d8 00:27:37.418 [2024-04-26 13:37:54.741278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.741314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.751720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f46d0 00:27:37.418 [2024-04-26 13:37:54.753018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.753052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.765154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5ec8 00:27:37.418 [2024-04-26 13:37:54.766802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.418 [2024-04-26 13:37:54.766838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:37.418 [2024-04-26 13:37:54.776336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fb048 00:27:37.418 [2024-04-26 13:37:54.777831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.777865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.788044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fdeb0 00:27:37.419 [2024-04-26 13:37:54.789512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.789548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.799968] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4298 00:27:37.419 [2024-04-26 13:37:54.800953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.800989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.811328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eff18 00:27:37.419 [2024-04-26 13:37:54.812198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.812235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.822038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f9f68 00:27:37.419 [2024-04-26 13:37:54.823049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.823086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.837558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f46d0 00:27:37.419 [2024-04-26 13:37:54.839578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.839619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.846072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f8e88 00:27:37.419 [2024-04-26 13:37:54.847102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.847141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:37.419 [2024-04-26 13:37:54.860469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5a90 00:27:37.419 [2024-04-26 13:37:54.862030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.419 [2024-04-26 13:37:54.862072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.871950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e1710 00:27:37.678 [2024-04-26 13:37:54.873308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.873345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.883926] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f2d80 00:27:37.678 [2024-04-26 13:37:54.884963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.884999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.895048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190dece0 00:27:37.678 [2024-04-26 13:37:54.896873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.896909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.907871] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fa3a0 00:27:37.678 [2024-04-26 13:37:54.908961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.908997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.919241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ebfd0 00:27:37.678 [2024-04-26 13:37:54.920109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.920146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.930909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f57b0 00:27:37.678 [2024-04-26 13:37:54.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.932063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.942683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f9b30 00:27:37.678 [2024-04-26 13:37:54.943762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.943808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.954118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f46d0 00:27:37.678 [2024-04-26 13:37:54.954992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.955028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:37.678 
[2024-04-26 13:37:54.966196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1ca0 00:27:37.678 [2024-04-26 13:37:54.967410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.967446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.979719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f0788 00:27:37.678 [2024-04-26 13:37:54.981282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.981319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:54.989053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e95a0 00:27:37.678 [2024-04-26 13:37:54.989943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:54.989979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.003682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ef270 00:27:37.678 [2024-04-26 13:37:55.005186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.005229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.015218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f0350 00:27:37.678 [2024-04-26 13:37:55.016471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.016509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.027188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f81e0 00:27:37.678 [2024-04-26 13:37:55.028595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.028631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.038312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e6fa8 00:27:37.678 [2024-04-26 13:37:55.039299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.039336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 
m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.051774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eaef0 00:27:37.678 [2024-04-26 13:37:55.053244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.053283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:37.678 [2024-04-26 13:37:55.063142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4f40 00:27:37.678 [2024-04-26 13:37:55.064387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.678 [2024-04-26 13:37:55.064425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:37.679 [2024-04-26 13:37:55.074612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e01f8 00:27:37.679 [2024-04-26 13:37:55.075732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.679 [2024-04-26 13:37:55.075770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:37.679 [2024-04-26 13:37:55.086111] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e12d8 00:27:37.679 [2024-04-26 13:37:55.087070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.679 [2024-04-26 13:37:55.087111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:37.679 [2024-04-26 13:37:55.100541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e23b8 00:27:37.679 [2024-04-26 13:37:55.102345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.679 [2024-04-26 13:37:55.102392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:37.679 [2024-04-26 13:37:55.109079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e3060 00:27:37.679 [2024-04-26 13:37:55.109872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.679 [2024-04-26 13:37:55.109913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:37.679 [2024-04-26 13:37:55.123341] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f57b0 00:27:37.679 [2024-04-26 13:37:55.124835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.679 [2024-04-26 13:37:55.124870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.134438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f8a50 00:27:37.938 [2024-04-26 13:37:55.135456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.135494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.146127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e0630 00:27:37.938 [2024-04-26 13:37:55.147323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.147358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.159622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fd208 00:27:37.938 [2024-04-26 13:37:55.161180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.161216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.170939] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fa7d8 00:27:37.938 [2024-04-26 13:37:55.172361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.172400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.184917] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e4140 00:27:37.938 [2024-04-26 13:37:55.186986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.187022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.193458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fc998 00:27:37.938 [2024-04-26 13:37:55.194520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.194556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.205483] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e9e10 00:27:37.938 [2024-04-26 13:37:55.206061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.206098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.217563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190df118 00:27:37.938 [2024-04-26 13:37:55.218472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.218510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.228575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1868 00:27:37.938 [2024-04-26 13:37:55.229292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.229329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.242639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fcdd0 00:27:37.938 [2024-04-26 13:37:55.243560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.243600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.253984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f1868 00:27:37.938 [2024-04-26 13:37:55.254741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.254792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.265313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5ec8 00:27:37.938 [2024-04-26 13:37:55.265900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.265937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.278949] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e3498 00:27:37.938 [2024-04-26 13:37:55.280348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.280385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.290302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e12d8 00:27:37.938 [2024-04-26 13:37:55.291556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.291594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.301719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e1b48 00:27:37.938 [2024-04-26 13:37:55.302837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.302874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.315983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190fcdd0 00:27:37.938 [2024-04-26 13:37:55.317932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.317972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.324491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e6b70 00:27:37.938 [2024-04-26 13:37:55.325435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.325471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.336657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e0ea0 00:27:37.938 [2024-04-26 13:37:55.337590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.337634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.348118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eaab8 00:27:37.938 [2024-04-26 13:37:55.348906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.348944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.362136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4f40 00:27:37.938 [2024-04-26 13:37:55.363154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.363193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:37.938 [2024-04-26 13:37:55.372960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f8a50 00:27:37.938 [2024-04-26 13:37:55.374082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.938 [2024-04-26 13:37:55.374119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.387222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4f40 00:27:38.198 [2024-04-26 13:37:55.389041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.389079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.395711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190dfdc0 00:27:38.198 [2024-04-26 13:37:55.396539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.396575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.409956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190df988 00:27:38.198 [2024-04-26 13:37:55.411463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.411501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.421830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f9b30 00:27:38.198 [2024-04-26 13:37:55.422854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.422891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.433456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ef270 00:27:38.198 [2024-04-26 13:37:55.434720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.434757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.445062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190eaef0 00:27:38.198 [2024-04-26 13:37:55.446440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.446477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.457619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190de8a8 00:27:38.198 [2024-04-26 13:37:55.459209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 
13:37:55.459245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.468745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190e5220 00:27:38.198 [2024-04-26 13:37:55.469873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.469912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.480266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190efae0 00:27:38.198 [2024-04-26 13:37:55.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.481194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.492486] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f6458 00:27:38.198 [2024-04-26 13:37:55.493560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.493597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.505931] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190feb58 00:27:38.198 [2024-04-26 13:37:55.507474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.507513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.515181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ecc78 00:27:38.198 [2024-04-26 13:37:55.516056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.516093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.529475] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190ee190 00:27:38.198 [2024-04-26 13:37:55.531075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:38.198 [2024-04-26 13:37:55.531112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:38.198 [2024-04-26 13:37:55.540053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200d9b0) with pdu=0x2000190f4f40 00:27:38.198 [2024-04-26 13:37:55.541770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:38.198 [2024-04-26 13:37:55.541820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:38.198 00:27:38.198 Latency(us) 00:27:38.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.198 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:38.198 nvme0n1 : 2.01 21280.79 83.13 0.00 0.00 6005.37 2472.49 16086.11 00:27:38.198 =================================================================================================================== 00:27:38.198 Total : 21280.79 83.13 0.00 0.00 6005.37 2472.49 16086.11 00:27:38.198 0 00:27:38.198 13:37:55 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:38.198 13:37:55 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:38.198 13:37:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:38.198 13:37:55 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:38.198 | .driver_specific 00:27:38.198 | .nvme_error 00:27:38.198 | .status_code 00:27:38.198 | .command_transient_transport_error' 00:27:38.456 13:37:55 -- host/digest.sh@71 -- # (( 167 > 0 )) 00:27:38.456 13:37:55 -- host/digest.sh@73 -- # killprocess 86231 00:27:38.456 13:37:55 -- common/autotest_common.sh@936 -- # '[' -z 86231 ']' 00:27:38.456 13:37:55 -- common/autotest_common.sh@940 -- # kill -0 86231 00:27:38.456 13:37:55 -- common/autotest_common.sh@941 -- # uname 00:27:38.456 13:37:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:38.456 13:37:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86231 00:27:38.456 killing process with pid 86231 00:27:38.456 Received shutdown signal, test time was about 2.000000 seconds 00:27:38.456 00:27:38.456 Latency(us) 00:27:38.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.456 =================================================================================================================== 00:27:38.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.456 13:37:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:38.456 13:37:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:38.456 13:37:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86231' 00:27:38.456 13:37:55 -- common/autotest_common.sh@955 -- # kill 86231 00:27:38.456 13:37:55 -- common/autotest_common.sh@960 -- # wait 86231 00:27:38.714 13:37:56 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:38.714 13:37:56 -- host/digest.sh@54 -- # local rw bs qd 00:27:38.714 13:37:56 -- host/digest.sh@56 -- # rw=randwrite 00:27:38.714 13:37:56 -- host/digest.sh@56 -- # bs=131072 00:27:38.714 13:37:56 -- host/digest.sh@56 -- # qd=16 00:27:38.714 13:37:56 -- host/digest.sh@58 -- # bperfpid=86328 00:27:38.714 13:37:56 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:38.714 13:37:56 -- host/digest.sh@60 -- # waitforlisten 86328 /var/tmp/bperf.sock 00:27:38.714 13:37:56 -- common/autotest_common.sh@817 -- # '[' -z 86328 ']' 00:27:38.714 13:37:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.714 13:37:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.714 13:37:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.714 13:37:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.714 13:37:56 -- common/autotest_common.sh@10 -- # set +x 00:27:38.972 [2024-04-26 13:37:56.199669] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:38.972 [2024-04-26 13:37:56.200127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86328 ] 00:27:38.972 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:38.972 Zero copy mechanism will not be used. 00:27:38.972 [2024-04-26 13:37:56.341467] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.228 [2024-04-26 13:37:56.460164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.791 13:37:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:39.791 13:37:57 -- common/autotest_common.sh@850 -- # return 0 00:27:39.791 13:37:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.791 13:37:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:40.050 13:37:57 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:40.050 13:37:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.050 13:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:40.050 13:37:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.050 13:37:57 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.050 13:37:57 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.616 nvme0n1 00:27:40.616 13:37:57 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:40.616 13:37:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.616 13:37:57 -- common/autotest_common.sh@10 -- # set +x 00:27:40.616 13:37:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.616 13:37:57 -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:40.616 13:37:57 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:40.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:40.616 Zero copy mechanism will not be used. 00:27:40.616 Running I/O for 2 seconds... 
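[editor's note] The xtrace above shows the digest-error pass of host/digest.sh moving on to its next workload: it reads the transient-error counter from the previous run (the `(( 167 > 0 ))` check), kills the old bdevperf instance, starts a new one (-w randwrite -o 131072 -q 16 -t 2 on /var/tmp/bperf.sock), enables NVMe error statistics, attaches the controller over TCP with data digest (--ddgst), injects crc32c corruption every 32nd calculation through the accel error injector, and runs perform_tests. Each injected corruption is what produces the "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pairs that fill this log. A minimal standalone sketch of that sequence, reconstructed only from the commands visible in the trace (the `target_rpc` wrapper stands in for the suite's `rpc_cmd` helper and is assumed to hit the target's default RPC socket; paths, IP, and NQN are copied from the log and may differ elsewhere):

    # Sketch of the digest error-injection flow seen in the xtrace above (assumptions noted).
    SPDK=/home/vagrant/spdk_repo/spdk
    bperf_rpc()  { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }   # bdevperf instance
    target_rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # nvmf target, default socket (assumption)

    # Keep per-bdev NVMe error counters and retry failed I/O indefinitely.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Leave crc32c corruption disabled while the controller attaches ...
    target_rpc accel_error_inject_error -o crc32c -t disable
    # ... attach over TCP with data digest enabled ...
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ... then corrupt every 32nd crc32c so writes fail data-digest verification.
    target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload bdevperf was started with (-w randwrite -o 131072 -q 16 -t 2).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # Each injected digest error completes as a transient transport error; count them.
    bperf_rpc bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the pass that just finished, this counter came back as 167, which is exactly what the `(( 167 > 0 ))` assertion in the trace verifies before the test tears down pid 86231 and starts pid 86328 for the next block size.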
00:27:40.616 [2024-04-26 13:37:57.964147] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.616 [2024-04-26 13:37:57.964477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.964510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.969380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.969673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.969708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.974776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.975117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.975153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.980018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.980336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.985274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.985590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.985629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.990538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.990862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.990902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:57.995687] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:57.996014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:57.996060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.000975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.001271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.001306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.006126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.006436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.006468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.011284] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.011574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.011607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.016457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.016758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.016802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.021603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.021901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.021929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.026656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.026956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.026979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.031664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.031968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.032000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.036731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.037092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.041813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.042099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.042132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.046959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.047246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.047287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.052048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.052333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.052368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.057108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.057394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.057427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.617 [2024-04-26 13:37:58.062137] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.617 [2024-04-26 13:37:58.062435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.617 [2024-04-26 13:37:58.062459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.067260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.067552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.067592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.072325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.072612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.072658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.077429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.077727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.077760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.082525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.082853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.087566] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.087863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.087891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.092581] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.092891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.092933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.097741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.098049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.098083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.102884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.103190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 
[2024-04-26 13:37:58.103216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.108201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.108513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.108543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.113354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.113652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.113681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.118519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.118826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.118863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.123676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.123976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.124020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.128897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.129185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.129223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.133991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.134275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.134307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.139053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.139340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.139370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.144089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.144373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.144406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.149133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.149419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.876 [2024-04-26 13:37:58.149452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.876 [2024-04-26 13:37:58.154164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.876 [2024-04-26 13:37:58.154466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.154494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.159336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.159624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.159658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.164364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.164652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.164689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.169478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.169806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.169840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.174667] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.174968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.174996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.179806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.180093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.180125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.184889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.185174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.185209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.190045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.190349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.190391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.195166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.195454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.195483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.200250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.200536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.200571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.205392] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.205701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.205733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.210647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.210963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.210997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.215775] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.216081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.216103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.220819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.221105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.221137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.225873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.226163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.226198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.231022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.231310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.231344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.236154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.236441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.236485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.241237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.241523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.241559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.246272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 
[2024-04-26 13:37:58.246582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.246618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.251357] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.251654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.251680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.256514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.256812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.256842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.261649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.261954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.261988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.266832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.267119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.267160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.271930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.272217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.272245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.277152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.277457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.277490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.282347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.282661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.282690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.287538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.287865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.877 [2024-04-26 13:37:58.287897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.877 [2024-04-26 13:37:58.292680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.877 [2024-04-26 13:37:58.292996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.293029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.297801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.878 [2024-04-26 13:37:58.298096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.298128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.302932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.878 [2024-04-26 13:37:58.303223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.303254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.308109] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.878 [2024-04-26 13:37:58.308418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.308449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.313264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.878 [2024-04-26 13:37:58.313555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.313589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.318407] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:40.878 [2024-04-26 13:37:58.318709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.878 [2024-04-26 13:37:58.318740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:40.878 [2024-04-26 13:37:58.323488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.323800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.323832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.137 [2024-04-26 13:37:58.328644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.328947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.328980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.137 [2024-04-26 13:37:58.333758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.334100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.137 [2024-04-26 13:37:58.338920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.339209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.339241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.137 [2024-04-26 13:37:58.344080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.344384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.344410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.137 [2024-04-26 13:37:58.349245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.349561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.349598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:41.137 [2024-04-26 13:37:58.354586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.137 [2024-04-26 13:37:58.354914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.137 [2024-04-26 13:37:58.354949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... same three-message sequence repeated for each injected write from 13:37:58.359 through 13:37:59.091 — tcp.c:2047:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90, followed by nvme_qpair.c:243 *NOTICE*: WRITE sqid:1 cid:15 nsid:1 len:32 and nvme_qpair.c:474 *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15, differing only in the lba and sqhd fields ...]
00:27:41.661 [2024-04-26 13:37:59.096419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.661 [2024-04-26 13:37:59.096706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.661 [2024-04-26 13:37:59.096744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.661 [2024-04-26 13:37:59.101488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.661 [2024-04-26 13:37:59.101799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.661 [2024-04-26 13:37:59.101848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.661 [2024-04-26 13:37:59.106595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.661 [2024-04-26 13:37:59.106890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.661 [2024-04-26 13:37:59.106922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.921 [2024-04-26 13:37:59.111635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.921 [2024-04-26 13:37:59.111931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.921 [2024-04-26 13:37:59.111964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.921 [2024-04-26 13:37:59.116699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.921 [2024-04-26 13:37:59.117004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.921 [2024-04-26 13:37:59.117036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.921 [2024-04-26 13:37:59.121733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.921 [2024-04-26 13:37:59.122036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.921 [2024-04-26 13:37:59.122067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.921 [2024-04-26 13:37:59.126742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.921 [2024-04-26 13:37:59.127044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.921 [2024-04-26 13:37:59.127072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.921 [2024-04-26 13:37:59.131829] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.921 [2024-04-26 13:37:59.132120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.136975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.137263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.137294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.142076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.142362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.142403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.147224] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.147512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.147545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.152347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.152636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.157451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.157746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.157789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.162514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.162815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.162846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:41.922 [2024-04-26 13:37:59.167621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.167921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.167954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.172726] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.173032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.173065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.177894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.178179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.178212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.183039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.183326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.183359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.188082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.188377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.188417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.193221] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.193507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.193540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.198292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.198591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.198623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.203382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.203669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.203702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.208478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.208775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.208821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.213596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.213900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.213924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.218709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.219017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.219052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.223838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.224141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.224174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.228945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.229241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.229273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.234035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.234319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.234353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.239105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.239392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.239426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.244186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.244470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.244503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.249213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.249500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.249535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.254313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.254611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.254644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.259472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.259771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.259815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.264529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.264830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.922 [2024-04-26 13:37:59.264862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.922 [2024-04-26 13:37:59.269602] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.922 [2024-04-26 13:37:59.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.269939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.274676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.274975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.275008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.279746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.280048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.280081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.284826] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.285133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.285165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.289889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.290175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.290207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.294921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.295209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.295241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.299992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.300282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.300316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.305011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.305298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 
[2024-04-26 13:37:59.305331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.310128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.310422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.310446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.315156] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.315441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.315474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.320192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.320478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.320511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.325258] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.325552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.325584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.330348] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.330642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.330674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.335425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.335710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.335743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.340502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.340816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.340848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.345595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.345894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.345925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.350678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.350973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.351000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.355806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.356094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.356126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.360852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.361139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.361170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:41.923 [2024-04-26 13:37:59.365870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:41.923 [2024-04-26 13:37:59.366156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.923 [2024-04-26 13:37:59.366188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.370925] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.371209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.371241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.375936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.376224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.376256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.381040] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.381341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.381373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.386063] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.386351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.386392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.391134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.391418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.391451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.396178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.396466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.396498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.401241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.401644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.401693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.406473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.406761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.406810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.411397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.411681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.411716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.416364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.416637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.416671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.421290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.421571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.421595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.426201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.426564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.426618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.431205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.431506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.431562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.435737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.435966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.435997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.440306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.440511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.440537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.444868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 
[2024-04-26 13:37:59.445068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.445093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.449376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.449582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.449606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.453895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.454106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.454130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.458399] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.458620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.458654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.462991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.183 [2024-04-26 13:37:59.463201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.183 [2024-04-26 13:37:59.463234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.183 [2024-04-26 13:37:59.467472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.467671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.467694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.471990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.472190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.472212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.476457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.476656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.476679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.480903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.481098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.481120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.485372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.485568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.485592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.489831] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.490033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.490056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.494405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.494622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.494648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.498934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.499142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.499169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.503490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.503713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.503740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.508031] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.508239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.508265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.512575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.512790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.512816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.517143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.517346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.517378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.521616] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.521840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.521866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.526180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.526402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.526425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.530706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.530924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.530948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.535234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.535460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:42.184 [2024-04-26 13:37:59.539700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.539919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.539942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.544191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.544387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.544409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.548671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.548880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.548903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.553205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.553416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.553438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.557668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.557890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.557928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.562216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.562427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.562451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.566705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.566914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.566937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.571237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.571435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.571473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.575758] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.575969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.575993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.580213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.580419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.580442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.584732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.584948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.584971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.589233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.589430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.589452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.593755] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.593978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.594011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.598219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.598426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.598449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.602731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.602963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.602987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.607203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.607397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.607426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.611634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.611841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.611864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.616168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.616361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.616385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.620650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.620860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.620883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.625230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.184 [2024-04-26 13:37:59.625494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.184 [2024-04-26 13:37:59.625532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.184 [2024-04-26 13:37:59.629673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.185 [2024-04-26 13:37:59.629902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.185 [2024-04-26 13:37:59.629931] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.634228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.634449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.634476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.638708] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.638923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.638948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.643226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.643431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.643455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.647839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.648037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.648060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.652274] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.652483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.652505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.656752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.656977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.657001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.661247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.661457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 
[2024-04-26 13:37:59.661480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.665680] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.665894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.665919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.670187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.670392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.670415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.674673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.674889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.674912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.679254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.679459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.679484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.683744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.683958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.683982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.688309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.688514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.688539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.692966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.693165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.693188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.444 [2024-04-26 13:37:59.697558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.444 [2024-04-26 13:37:59.697788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.444 [2024-04-26 13:37:59.697811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.702266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.702482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.702508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.706756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.707007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.707039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.711324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.711540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.711564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.715911] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.716119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.716142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.720596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.720825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.720865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.725290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.725505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.725528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.730081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.730284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.730308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.734850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.735090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.735116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.739280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.739536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.739568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.743838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.744076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.744101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.748495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.748724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.748750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.753368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.753573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.753603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.757990] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.758216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.758239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.762794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.763022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.763044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.767518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.767720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.767743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.772046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.772265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.776552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.776749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.776791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.781193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.781420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.781442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.785796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.785992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.786015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.790262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 
[2024-04-26 13:37:59.790478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.790500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.794799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.794996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.795019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.799439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.799633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.799657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.804098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.804362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.804384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.808914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.809133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.809173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.813623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.813833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.813870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.818404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.818613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.818645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.823211] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.823495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.823517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.828062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.828288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.828311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.832814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.833052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.833074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.837640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.837835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.837871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.842196] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.842407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.842430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.846748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.846961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.846985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.851290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.851496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.851520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.855847] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.856052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.856088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.860358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.860576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.860608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.864900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.865108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.865135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.869503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.869705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.869743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.874413] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.874647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.874681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.879170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.879374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.879400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.445 [2024-04-26 13:37:59.883646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.883861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.883886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:42.445 [2024-04-26 13:37:59.888419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.445 [2024-04-26 13:37:59.888629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.445 [2024-04-26 13:37:59.888652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.704 [2024-04-26 13:37:59.892998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.704 [2024-04-26 13:37:59.893192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.704 [2024-04-26 13:37:59.893215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.704 [2024-04-26 13:37:59.897516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.704 [2024-04-26 13:37:59.897713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.704 [2024-04-26 13:37:59.897755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.704 [2024-04-26 13:37:59.902190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.704 [2024-04-26 13:37:59.902404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.704 [2024-04-26 13:37:59.902434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.704 [2024-04-26 13:37:59.906737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.704 [2024-04-26 13:37:59.906974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.906998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.911293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.911505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.911529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.915766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.916006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.916030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.920292] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.920492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.920515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.924895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.925123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.925146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.929448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.929650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.929673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.933963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.934160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.934183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.938620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.938830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.938853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.943291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.943486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.943508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.947937] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.948148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.948170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.952492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.952690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.952711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:42.705 [2024-04-26 13:37:59.956972] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x200db50) with pdu=0x2000190fef90 00:27:42.705 [2024-04-26 13:37:59.957167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.705 [2024-04-26 13:37:59.957190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:42.705 00:27:42.705 Latency(us) 00:27:42.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.705 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:42.705 nvme0n1 : 2.00 6250.27 781.28 0.00 0.00 2554.30 1660.74 5391.83 00:27:42.705 =================================================================================================================== 00:27:42.705 Total : 6250.27 781.28 0.00 0.00 2554.30 1660.74 5391.83 00:27:42.705 0 00:27:42.705 13:37:59 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:42.705 13:37:59 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:42.705 13:37:59 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:42.705 | .driver_specific 00:27:42.705 | .nvme_error 00:27:42.705 | .status_code 00:27:42.705 | .command_transient_transport_error' 00:27:42.705 13:37:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:42.964 13:38:00 -- host/digest.sh@71 -- # (( 403 > 0 )) 00:27:42.964 13:38:00 -- host/digest.sh@73 -- # killprocess 86328 00:27:42.964 13:38:00 -- common/autotest_common.sh@936 -- # '[' -z 86328 ']' 00:27:42.964 13:38:00 -- common/autotest_common.sh@940 -- # kill -0 86328 00:27:42.964 13:38:00 -- common/autotest_common.sh@941 -- # uname 00:27:42.964 13:38:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:42.964 13:38:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86328 00:27:42.964 killing process with pid 86328 00:27:42.964 Received shutdown signal, test time was about 2.000000 seconds 00:27:42.964 00:27:42.964 Latency(us) 00:27:42.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.964 =================================================================================================================== 00:27:42.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.964 13:38:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:42.964 13:38:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:42.964 13:38:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86328' 00:27:42.964 13:38:00 -- common/autotest_common.sh@955 -- # kill 86328 00:27:42.964 13:38:00 -- common/autotest_common.sh@960 -- # wait 86328 00:27:43.222 13:38:00 -- host/digest.sh@116 -- # killprocess 86001 
00:27:43.222 13:38:00 -- common/autotest_common.sh@936 -- # '[' -z 86001 ']' 00:27:43.222 13:38:00 -- common/autotest_common.sh@940 -- # kill -0 86001 00:27:43.222 13:38:00 -- common/autotest_common.sh@941 -- # uname 00:27:43.222 13:38:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:43.222 13:38:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86001 00:27:43.222 killing process with pid 86001 00:27:43.222 13:38:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:43.222 13:38:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:43.222 13:38:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86001' 00:27:43.222 13:38:00 -- common/autotest_common.sh@955 -- # kill 86001 00:27:43.222 13:38:00 -- common/autotest_common.sh@960 -- # wait 86001 00:27:43.532 ************************************ 00:27:43.532 END TEST nvmf_digest_error 00:27:43.532 ************************************ 00:27:43.532 00:27:43.532 real 0m19.541s 00:27:43.532 user 0m37.597s 00:27:43.532 sys 0m4.970s 00:27:43.532 13:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:43.532 13:38:00 -- common/autotest_common.sh@10 -- # set +x 00:27:43.532 13:38:00 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:43.532 13:38:00 -- host/digest.sh@150 -- # nvmftestfini 00:27:43.532 13:38:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:43.532 13:38:00 -- nvmf/common.sh@117 -- # sync 00:27:43.822 13:38:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:43.822 13:38:00 -- nvmf/common.sh@120 -- # set +e 00:27:43.822 13:38:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:43.822 13:38:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:43.822 rmmod nvme_tcp 00:27:43.822 rmmod nvme_fabrics 00:27:43.822 rmmod nvme_keyring 00:27:43.822 13:38:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:43.822 13:38:01 -- nvmf/common.sh@124 -- # set -e 00:27:43.822 13:38:01 -- nvmf/common.sh@125 -- # return 0 00:27:43.822 13:38:01 -- nvmf/common.sh@478 -- # '[' -n 86001 ']' 00:27:43.822 13:38:01 -- nvmf/common.sh@479 -- # killprocess 86001 00:27:43.822 13:38:01 -- common/autotest_common.sh@936 -- # '[' -z 86001 ']' 00:27:43.822 13:38:01 -- common/autotest_common.sh@940 -- # kill -0 86001 00:27:43.822 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (86001) - No such process 00:27:43.822 Process with pid 86001 is not found 00:27:43.822 13:38:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 86001 is not found' 00:27:43.822 13:38:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:43.822 13:38:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:43.822 13:38:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:43.822 13:38:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:43.822 13:38:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:43.822 13:38:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.822 13:38:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.822 13:38:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.822 13:38:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:43.822 00:27:43.822 real 0m41.180s 00:27:43.822 user 1m17.808s 00:27:43.822 sys 0m10.617s 00:27:43.822 13:38:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:43.822 13:38:01 -- common/autotest_common.sh@10 -- # set +x 00:27:43.822 
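For reference, the transient-error check that decided the digest_error pass above boils down to a single RPC call plus a jq filter over bdevperf's iostat JSON. A minimal sketch follows; the rpc.py path, the /var/tmp/bperf.sock socket, the nvme0n1 bdev name and the jq filter are taken from the trace, while the standalone function wrapper is only illustrative and may not match host/digest.sh exactly.

# Sketch only: count COMMAND TRANSIENT TRANSPORT ERROR completions recorded
# for a bdevperf bdev. Paths, socket and jq filter come from the trace above;
# the function wrapper itself is an illustration.
get_transient_errcount() {
    local bdev=$1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The digest_error test passes as long as at least one such error was seen,
# e.g. the 403 errors reported in this run:
(( $(get_transient_errcount nvme0n1) > 0 ))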
************************************ 00:27:43.822 END TEST nvmf_digest 00:27:43.822 ************************************ 00:27:43.822 13:38:01 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:27:43.822 13:38:01 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:27:43.822 13:38:01 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:43.822 13:38:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:43.822 13:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:43.822 13:38:01 -- common/autotest_common.sh@10 -- # set +x 00:27:43.822 ************************************ 00:27:43.822 START TEST nvmf_mdns_discovery 00:27:43.822 ************************************ 00:27:43.822 13:38:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:27:44.090 * Looking for test storage... 00:27:44.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:44.090 13:38:01 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:44.090 13:38:01 -- nvmf/common.sh@7 -- # uname -s 00:27:44.090 13:38:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.090 13:38:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.090 13:38:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.090 13:38:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.090 13:38:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.090 13:38:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.090 13:38:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.090 13:38:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.090 13:38:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.090 13:38:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.090 13:38:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:27:44.090 13:38:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:27:44.090 13:38:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.090 13:38:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.090 13:38:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:44.090 13:38:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.091 13:38:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:44.091 13:38:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.091 13:38:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.091 13:38:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.091 13:38:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.091 13:38:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.091 13:38:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.091 13:38:01 -- paths/export.sh@5 -- # export PATH 00:27:44.091 13:38:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.091 13:38:01 -- nvmf/common.sh@47 -- # : 0 00:27:44.091 13:38:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.091 13:38:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.091 13:38:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.091 13:38:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.091 13:38:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.091 13:38:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.091 13:38:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.091 13:38:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:27:44.091 13:38:01 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:27:44.091 13:38:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:44.091 13:38:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.091 13:38:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:44.091 13:38:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:44.091 13:38:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:44.091 13:38:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.091 13:38:01 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:27:44.091 13:38:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.091 13:38:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:44.091 13:38:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:44.091 13:38:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:44.091 13:38:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:44.091 13:38:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:44.091 13:38:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:44.091 13:38:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.091 13:38:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.091 13:38:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:44.091 13:38:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:44.091 13:38:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:44.091 13:38:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:44.091 13:38:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:44.091 13:38:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.091 13:38:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:44.091 13:38:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:44.091 13:38:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:44.091 13:38:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:44.091 13:38:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:44.091 13:38:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:44.091 Cannot find device "nvmf_tgt_br" 00:27:44.091 13:38:01 -- nvmf/common.sh@155 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:44.091 Cannot find device "nvmf_tgt_br2" 00:27:44.091 13:38:01 -- nvmf/common.sh@156 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:44.091 13:38:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:44.091 Cannot find device "nvmf_tgt_br" 00:27:44.091 13:38:01 -- nvmf/common.sh@158 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:44.091 Cannot find device "nvmf_tgt_br2" 00:27:44.091 13:38:01 -- nvmf/common.sh@159 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:44.091 13:38:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:44.091 13:38:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:44.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.091 13:38:01 -- nvmf/common.sh@162 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:44.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:44.091 13:38:01 -- nvmf/common.sh@163 -- # true 00:27:44.091 13:38:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:44.091 13:38:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:44.091 13:38:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:44.091 13:38:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:44.091 13:38:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:27:44.091 13:38:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:44.349 13:38:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:44.349 13:38:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:44.349 13:38:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:44.349 13:38:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:44.349 13:38:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:44.349 13:38:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:44.349 13:38:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:44.349 13:38:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:44.349 13:38:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:44.349 13:38:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:44.349 13:38:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:44.349 13:38:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:44.349 13:38:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:44.349 13:38:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:44.349 13:38:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:44.349 13:38:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:44.349 13:38:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:44.349 13:38:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:44.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:27:44.349 00:27:44.349 --- 10.0.0.2 ping statistics --- 00:27:44.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.349 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:44.349 13:38:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:44.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:44.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:27:44.349 00:27:44.349 --- 10.0.0.3 ping statistics --- 00:27:44.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.349 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:27:44.349 13:38:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:44.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:44.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:44.349 00:27:44.349 --- 10.0.0.1 ping statistics --- 00:27:44.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.349 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:44.349 13:38:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.349 13:38:01 -- nvmf/common.sh@422 -- # return 0 00:27:44.349 13:38:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:44.349 13:38:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.349 13:38:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:44.349 13:38:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:44.349 13:38:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.349 13:38:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:44.349 13:38:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:44.349 13:38:01 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:27:44.349 13:38:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:44.349 13:38:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:44.349 13:38:01 -- common/autotest_common.sh@10 -- # set +x 00:27:44.349 13:38:01 -- nvmf/common.sh@470 -- # nvmfpid=86621 00:27:44.349 13:38:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:27:44.349 13:38:01 -- nvmf/common.sh@471 -- # waitforlisten 86621 00:27:44.349 13:38:01 -- common/autotest_common.sh@817 -- # '[' -z 86621 ']' 00:27:44.349 13:38:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.349 13:38:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:44.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.349 13:38:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.349 13:38:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:44.349 13:38:01 -- common/autotest_common.sh@10 -- # set +x 00:27:44.349 [2024-04-26 13:38:01.745815] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:44.349 [2024-04-26 13:38:01.746511] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.608 [2024-04-26 13:38:01.882207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.608 [2024-04-26 13:38:02.010480] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.608 [2024-04-26 13:38:02.010548] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.608 [2024-04-26 13:38:02.010564] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.608 [2024-04-26 13:38:02.010574] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.608 [2024-04-26 13:38:02.010583] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
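The nvmf_veth_init trace above builds the virtual test network and the target is then launched inside it with --wait-for-rpc. A minimal standalone sketch of the same topology, assuming root plus iproute2/iptables; interface names, addresses and the nvmf_tgt path are taken from the trace:

# Target namespace with two veth pairs whose host-side peers are bridged back to the initiator.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 = initiator side, 10.0.0.2 / 10.0.0.3 = the two target interfaces.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
# Bridge the host-side peers together so 10.0.0.1 can reach both target addresses.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3        # reachability checks, as in the log
modprobe nvme-tcp
# Start the target inside the namespace, paused until framework_start_init is issued over RPC.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &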
00:27:44.608 [2024-04-26 13:38:02.010622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.539 13:38:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:45.539 13:38:02 -- common/autotest_common.sh@850 -- # return 0 00:27:45.539 13:38:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:45.539 13:38:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:45.539 13:38:02 -- common/autotest_common.sh@10 -- # set +x 00:27:45.539 13:38:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.539 13:38:02 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:27:45.539 13:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.539 13:38:02 -- common/autotest_common.sh@10 -- # set +x 00:27:45.539 13:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.539 13:38:02 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:27:45.539 13:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.539 13:38:02 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 13:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:02 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:45.796 13:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:02 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 [2024-04-26 13:38:03.005354] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 [2024-04-26 13:38:03.017433] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 null0 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 null1 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 null2 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 null3 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
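With the target paused by --wait-for-rpc, everything above is configured over JSON-RPC. Assuming rpc_cmd in the trace simply forwards its arguments to scripts/rpc.py on the default /var/tmp/spdk.sock socket, an equivalent standalone sequence would look like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_set_config --discovery-filter=address            # discovery filter, as passed in the trace
$rpc framework_start_init                                   # release the --wait-for-rpc hold-off
$rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport options copied verbatim from the trace
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009                              # discovery subsystem listener on port 8009
for b in null0 null1 null2 null3; do
    $rpc bdev_null_create "$b" 1000 512                     # four null bdevs (size 1000, block size 512)
done
$rpc bdev_wait_for_examine                                  # wait until the new bdevs finish examination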
00:27:45.796 13:38:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 13:38:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@47 -- # hostpid=86671 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@48 -- # waitforlisten 86671 /tmp/host.sock 00:27:45.796 13:38:03 -- common/autotest_common.sh@817 -- # '[' -z 86671 ']' 00:27:45.796 13:38:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:45.796 13:38:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:45.796 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:45.796 13:38:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:45.796 13:38:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:45.796 13:38:03 -- common/autotest_common.sh@10 -- # set +x 00:27:45.796 13:38:03 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:45.797 [2024-04-26 13:38:03.146024] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:27:45.797 [2024-04-26 13:38:03.146911] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86671 ] 00:27:46.053 [2024-04-26 13:38:03.290197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.053 [2024-04-26 13:38:03.419541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.987 13:38:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:46.987 13:38:04 -- common/autotest_common.sh@850 -- # return 0 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@57 -- # avahipid=86705 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@58 -- # sleep 1 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:27:46.987 13:38:04 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:27:46.987 Process 1005 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:27:46.987 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:27:46.987 Successfully dropped root privileges. 00:27:46.987 avahi-daemon 0.8 starting up. 00:27:46.987 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:27:46.987 Successfully called chroot(). 00:27:46.987 Successfully dropped remaining capabilities. 00:27:47.919 No service file found in /etc/avahi/services. 00:27:47.919 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:47.919 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:27:47.919 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:47.919 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:27:47.919 Network interface enumeration completed. 
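Two more processes complete the setup traced above: a second nvmf_tgt that plays the NVMe-oF host (its RPC socket is /tmp/host.sock, used by every rpc_cmd -s /tmp/host.sock below), and avahi-daemon restarted inside the target namespace so that only nvmf_tgt_if/nvmf_tgt_if2 carry mDNS. A sketch of that step, assuming the /dev/fd/63 argument in the trace comes from process substitution of the inline config:

# Host application (the mDNS discovery client side) on its own RPC socket.
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!
# mDNS responder confined to the target namespace, IPv4 only.
avahi-daemon --kill        # stop any previously running instance, as the test does
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no') &
avahipid=$!
sleep 1                    # give avahi time to join the mDNS groups on 10.0.0.2 / 10.0.0.3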
00:27:47.919 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:27:47.919 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:27:47.919 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:27:47.919 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:27:47.919 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3964193123. 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:47.919 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.919 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.919 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:47.919 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.919 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.919 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@68 -- # sort 00:27:47.919 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@68 -- # xargs 00:27:47.919 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.919 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.919 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.919 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@64 -- # sort 00:27:47.919 13:38:05 -- host/mdns_discovery.sh@64 -- # xargs 00:27:47.919 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # sort 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # xargs 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:27:48.178 
13:38:05 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # xargs 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # sort 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # sort 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@68 -- # xargs 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:27:48.178 [2024-04-26 13:38:05.588525] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.178 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:48.178 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # sort 00:27:48.178 13:38:05 -- host/mdns_discovery.sh@64 -- # xargs 00:27:48.178 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.436 13:38:05 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:27:48.436 13:38:05 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:48.436 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.436 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 [2024-04-26 13:38:05.662146] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.436 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.436 13:38:05 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:48.436 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.436 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.436 13:38:05 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:27:48.436 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.436 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.436 13:38:05 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.436 13:38:05 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:27:48.436 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.436 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:27:48.437 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.437 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:48.437 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.437 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 [2024-04-26 13:38:05.702092] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:48.437 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:48.437 13:38:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:48.437 13:38:05 -- common/autotest_common.sh@10 -- # set +x 00:27:48.437 [2024-04-26 13:38:05.710013] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:48.437 13:38:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86751 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:27:48.437 13:38:05 -- host/mdns_discovery.sh@125 -- # sleep 5 00:27:49.380 [2024-04-26 13:38:06.488527] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:49.380 Established under name 'CDC' 00:27:49.642 [2024-04-26 13:38:06.888581] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:49.642 [2024-04-26 13:38:06.888644] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:27:49.642 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:49.642 cookie is 0 00:27:49.642 is_local: 1 00:27:49.642 our_own: 0 00:27:49.642 wide_area: 0 00:27:49.642 multicast: 1 00:27:49.642 cached: 1 00:27:49.642 [2024-04-26 13:38:06.988548] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:49.642 [2024-04-26 13:38:06.988609] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:27:49.642 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:49.642 cookie is 0 00:27:49.642 is_local: 1 00:27:49.642 our_own: 0 00:27:49.642 wide_area: 0 00:27:49.642 multicast: 1 00:27:49.642 cached: 1 00:27:50.576 [2024-04-26 13:38:07.900201] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:50.576 [2024-04-26 13:38:07.900266] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:27:50.576 [2024-04-26 13:38:07.900287] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:50.576 [2024-04-26 13:38:07.986368] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:27:50.576 [2024-04-26 13:38:07.999950] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:50.576 [2024-04-26 13:38:07.999999] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:50.576 [2024-04-26 13:38:08.000019] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:50.834 [2024-04-26 13:38:08.052508] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:27:50.834 [2024-04-26 13:38:08.052566] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:50.834 [2024-04-26 13:38:08.086743] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:27:50.834 [2024-04-26 13:38:08.142494] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:50.834 [2024-04-26 13:38:08.142559] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:53.403 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.403 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@80 -- # sort 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@80 -- # xargs 00:27:53.403 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@76 -- # xargs 00:27:53.403 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@76 -- # sort 00:27:53.403 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.403 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:53.403 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.403 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@68 -- # sort 00:27:53.403 13:38:10 -- host/mdns_discovery.sh@68 -- # 
xargs 00:27:53.403 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.661 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@64 -- # sort 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@64 -- # xargs 00:27:53.661 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:53.661 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # xargs 00:27:53.661 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:53.661 13:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:10 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:53.661 13:38:10 -- host/mdns_discovery.sh@72 -- # xargs 00:27:53.661 13:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:53.661 13:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:53.661 13:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:53.661 13:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:27:53.661 13:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.661 13:38:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.661 13:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.661 13:38:11 -- host/mdns_discovery.sh@139 -- # sleep 1 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.045 13:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.045 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@64 -- # sort 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@64 -- # xargs 00:27:55.045 13:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:27:55.045 13:38:12 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:55.046 13:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.046 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 13:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:55.046 13:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.046 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 [2024-04-26 13:38:12.237429] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:55.046 [2024-04-26 13:38:12.238048] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:55.046 [2024-04-26 13:38:12.238088] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:55.046 [2024-04-26 13:38:12.238127] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:55.046 [2024-04-26 13:38:12.238143] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:55.046 13:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:27:55.046 13:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.046 13:38:12 -- common/autotest_common.sh@10 -- # set +x 00:27:55.046 [2024-04-26 13:38:12.245294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:55.046 [2024-04-26 13:38:12.246021] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:55.046 [2024-04-26 13:38:12.246083] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:55.046 13:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.046 13:38:12 -- host/mdns_discovery.sh@149 -- # sleep 1 00:27:55.046 [2024-04-26 13:38:12.377171] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:27:55.046 [2024-04-26 13:38:12.377458] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:27:55.046 [2024-04-26 13:38:12.434569] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:27:55.046 [2024-04-26 13:38:12.434625] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:55.046 [2024-04-26 13:38:12.434633] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:55.046 [2024-04-26 13:38:12.434658] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:55.046 [2024-04-26 13:38:12.434706] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:27:55.046 [2024-04-26 13:38:12.434717] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:55.046 [2024-04-26 13:38:12.434723] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:55.046 [2024-04-26 13:38:12.434739] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:55.046 [2024-04-26 13:38:12.480270] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:55.046 [2024-04-26 13:38:12.480305] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:55.046 [2024-04-26 13:38:12.480356] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:27:55.046 [2024-04-26 13:38:12.480365] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:55.981 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.981 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@68 -- # sort 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@68 -- # xargs 00:27:55.981 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@64 -- # xargs 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@64 -- # sort 00:27:55.981 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.981 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:55.981 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:55.981 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.981 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:55.981 13:38:13 -- host/mdns_discovery.sh@72 -- # xargs 00:27:55.981 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:56.240 13:38:13 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:56.240 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.240 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@72 -- # xargs 00:27:56.240 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:56.240 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.240 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:56.240 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:56.240 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.240 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.240 [2024-04-26 13:38:13.554830] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:56.240 [2024-04-26 13:38:13.554879] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:56.240 [2024-04-26 13:38:13.554921] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:56.240 [2024-04-26 13:38:13.554936] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:56.240 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:27:56.240 13:38:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.240 13:38:13 -- common/autotest_common.sh@10 -- # set +x 00:27:56.240 [2024-04-26 13:38:13.562810] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:56.240 [2024-04-26 13:38:13.562870] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:56.240 [2024-04-26 13:38:13.562942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.562981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.562996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.563006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 
13:38:13.563017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.563026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.563037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.563047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.563057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.240 13:38:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.240 13:38:13 -- host/mdns_discovery.sh@162 -- # sleep 1 00:27:56.240 [2024-04-26 13:38:13.570891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.570930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.570945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.570954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.570965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.240 [2024-04-26 13:38:13.570974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.240 [2024-04-26 13:38:13.570985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:56.241 [2024-04-26 13:38:13.570995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:56.241 [2024-04-26 13:38:13.571004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.572893] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.580848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.582923] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.241 [2024-04-26 13:38:13.583075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.583129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.583146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.241 [2024-04-26 13:38:13.583159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to 
be set 00:27:56.241 [2024-04-26 13:38:13.583178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.583206] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.583218] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.583230] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.241 [2024-04-26 13:38:13.583247] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.590863] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.241 [2024-04-26 13:38:13.590969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.591020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.591037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.241 [2024-04-26 13:38:13.591049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.591066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.591081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.591091] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.591102] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.241 [2024-04-26 13:38:13.591118] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.592995] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.241 [2024-04-26 13:38:13.593078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.593125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.593141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.241 [2024-04-26 13:38:13.593151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.593167] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.593181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.593191] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.593200] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:27:56.241 [2024-04-26 13:38:13.593215] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.600926] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.241 [2024-04-26 13:38:13.601048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.601098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.601114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.241 [2024-04-26 13:38:13.601125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.601142] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.601156] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.601165] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.601175] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.241 [2024-04-26 13:38:13.601189] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.603049] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.241 [2024-04-26 13:38:13.603137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.603184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.603200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.241 [2024-04-26 13:38:13.603211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.603227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.603254] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.603265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.603274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.241 [2024-04-26 13:38:13.603289] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.241 [2024-04-26 13:38:13.611011] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.241 [2024-04-26 13:38:13.611109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.611158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.611175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.241 [2024-04-26 13:38:13.611185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.611202] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.611217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.611226] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.611236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.241 [2024-04-26 13:38:13.611251] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.613105] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.241 [2024-04-26 13:38:13.613188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.613234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.613250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.241 [2024-04-26 13:38:13.613260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.613277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.613290] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.613300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.613309] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.241 [2024-04-26 13:38:13.613324] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.241 [2024-04-26 13:38:13.621075] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.241 [2024-04-26 13:38:13.621165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.621215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.621232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.241 [2024-04-26 13:38:13.621242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.621258] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.621272] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.621282] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.621291] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.241 [2024-04-26 13:38:13.621306] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.241 [2024-04-26 13:38:13.623159] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.241 [2024-04-26 13:38:13.623245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.623294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.241 [2024-04-26 13:38:13.623310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.241 [2024-04-26 13:38:13.623321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.241 [2024-04-26 13:38:13.623337] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.241 [2024-04-26 13:38:13.623363] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.241 [2024-04-26 13:38:13.623375] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.241 [2024-04-26 13:38:13.623384] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.241 [2024-04-26 13:38:13.623399] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.242 [2024-04-26 13:38:13.631135] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.242 [2024-04-26 13:38:13.631236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.631291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.631308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.242 [2024-04-26 13:38:13.631319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.631335] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.631350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.631359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.631369] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.242 [2024-04-26 13:38:13.631395] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.242 [2024-04-26 13:38:13.633214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.242 [2024-04-26 13:38:13.633302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.633349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.633365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.242 [2024-04-26 13:38:13.633375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.633392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.633406] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.633415] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.633424] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.242 [2024-04-26 13:38:13.633439] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.242 [2024-04-26 13:38:13.641200] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.242 [2024-04-26 13:38:13.641300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.641351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.641367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.242 [2024-04-26 13:38:13.641378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.641394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.641409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.641418] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.641428] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.242 [2024-04-26 13:38:13.641443] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.242 [2024-04-26 13:38:13.643265] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.242 [2024-04-26 13:38:13.643352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.643400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.643417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.242 [2024-04-26 13:38:13.643427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.643443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.643470] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.643481] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.643490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.242 [2024-04-26 13:38:13.643506] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.242 [2024-04-26 13:38:13.651263] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.242 [2024-04-26 13:38:13.651359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.651409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.651425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.242 [2024-04-26 13:38:13.651436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.651454] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.651480] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.651491] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.651501] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.242 [2024-04-26 13:38:13.651517] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.242 [2024-04-26 13:38:13.653322] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.242 [2024-04-26 13:38:13.653415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.653463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.653479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.242 [2024-04-26 13:38:13.653493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.653509] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.653524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.653533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.653542] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.242 [2024-04-26 13:38:13.653557] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.242 [2024-04-26 13:38:13.661327] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.242 [2024-04-26 13:38:13.661434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.661485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.661501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.242 [2024-04-26 13:38:13.661512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.661529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.661544] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.661553] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.661563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.242 [2024-04-26 13:38:13.661578] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.242 [2024-04-26 13:38:13.663383] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.242 [2024-04-26 13:38:13.663469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.663517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.663533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.242 [2024-04-26 13:38:13.663544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.663560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.663586] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.663596] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.663606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.242 [2024-04-26 13:38:13.663621] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.242 [2024-04-26 13:38:13.671398] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.242 [2024-04-26 13:38:13.671508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.671564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.671580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.242 [2024-04-26 13:38:13.671591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.242 [2024-04-26 13:38:13.671608] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.242 [2024-04-26 13:38:13.671634] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.242 [2024-04-26 13:38:13.671645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.242 [2024-04-26 13:38:13.671655] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.242 [2024-04-26 13:38:13.671671] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.242 [2024-04-26 13:38:13.673438] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.242 [2024-04-26 13:38:13.673521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.242 [2024-04-26 13:38:13.673573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.243 [2024-04-26 13:38:13.673589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.243 [2024-04-26 13:38:13.673600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.243 [2024-04-26 13:38:13.673616] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.243 [2024-04-26 13:38:13.673630] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.243 [2024-04-26 13:38:13.673639] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.243 [2024-04-26 13:38:13.673649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.243 [2024-04-26 13:38:13.673664] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.243 [2024-04-26 13:38:13.681471] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.243 [2024-04-26 13:38:13.681594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.243 [2024-04-26 13:38:13.681645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.243 [2024-04-26 13:38:13.681662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.243 [2024-04-26 13:38:13.681673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.243 [2024-04-26 13:38:13.681691] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.243 [2024-04-26 13:38:13.681707] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.243 [2024-04-26 13:38:13.681716] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.243 [2024-04-26 13:38:13.681726] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.243 [2024-04-26 13:38:13.681742] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.243 [2024-04-26 13:38:13.683492] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.243 [2024-04-26 13:38:13.683580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.243 [2024-04-26 13:38:13.683628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.243 [2024-04-26 13:38:13.683644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.243 [2024-04-26 13:38:13.683655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.243 [2024-04-26 13:38:13.683671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.243 [2024-04-26 13:38:13.683697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.243 [2024-04-26 13:38:13.683709] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.243 [2024-04-26 13:38:13.683718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.243 [2024-04-26 13:38:13.683734] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:56.502 [2024-04-26 13:38:13.691545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:27:56.502 [2024-04-26 13:38:13.691638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.502 [2024-04-26 13:38:13.691686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.502 [2024-04-26 13:38:13.691703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2131de0 with addr=10.0.0.3, port=4420 00:27:56.502 [2024-04-26 13:38:13.691714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2131de0 is same with the state(5) to be set 00:27:56.502 [2024-04-26 13:38:13.691730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2131de0 (9): Bad file descriptor 00:27:56.502 [2024-04-26 13:38:13.691757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:27:56.502 [2024-04-26 13:38:13.691768] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:27:56.502 [2024-04-26 13:38:13.691792] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:27:56.502 [2024-04-26 13:38:13.691810] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:56.502 [2024-04-26 13:38:13.693547] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:56.502 [2024-04-26 13:38:13.693627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.502 [2024-04-26 13:38:13.693673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:56.502 [2024-04-26 13:38:13.693689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2144ef0 with addr=10.0.0.2, port=4420 00:27:56.502 [2024-04-26 13:38:13.693699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2144ef0 is same with the state(5) to be set 00:27:56.502 [2024-04-26 13:38:13.693715] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2144ef0 (9): Bad file descriptor 00:27:56.502 [2024-04-26 13:38:13.693730] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.502 [2024-04-26 13:38:13.693739] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:56.502 [2024-04-26 13:38:13.693748] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.502 [2024-04-26 13:38:13.693762] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
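The eight blocks above are the bdev_nvme reset path retrying its TCP connect to port 4420 on 10.0.0.2 and 10.0.0.3 about every 10 ms. errno 111 is ECONNREFUSED: nothing is listening on 4420 any more, so each reconnect fails, the controller is marked failed, and the reset is retried until the discovery service re-attaches the subsystems on port 4421 (the "not found" / "found again" messages just below). A minimal sketch of how a caller could wait out that window, reusing only the rpc_cmd helper, the /tmp/host.sock socket, and the jq path that appear in this trace; the 30-second timeout is an assumed illustration value, not taken from the test:

# Sketch only, not part of the original scripts: poll host-side controllers
# until some path reports trsvcid 4421; reconnects to 4420 keep failing with
# errno 111 in the meantime.
wait_for_4421() {
    local deadline=$((SECONDS + 30))          # assumed timeout, not from the test
    while (( SECONDS < deadline )); do
        if rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
               | jq -r '.[].ctrlrs[].trid.trsvcid' | grep -qx 4421; then
            return 0
        fi
        sleep 1
    done
    return 1
}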
00:27:56.502 [2024-04-26 13:38:13.695946] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:27:56.502 [2024-04-26 13:38:13.695983] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:56.502 [2024-04-26 13:38:13.696030] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:56.502 [2024-04-26 13:38:13.696918] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:56.502 [2024-04-26 13:38:13.696949] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:56.502 [2024-04-26 13:38:13.696968] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:56.502 [2024-04-26 13:38:13.782049] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:27:56.502 [2024-04-26 13:38:13.783009] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:57.437 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.437 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@68 -- # sort 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@68 -- # xargs 00:27:57.437 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@64 -- # sort 00:27:57.437 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:57.437 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@64 -- # xargs 00:27:57.437 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:27:57.437 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:57.437 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # xargs 00:27:57.437 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
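With the discovery log page now reporting each NVM subsystem only on port 4421, the trace re-reads host state and asserts that every controller path has moved off 4420 (the get_subsystem_paths checks at mdns_discovery.sh @166/@167 around this point). A minimal sketch of that per-controller assertion, reusing the rpc_cmd helper, socket, controller names, and jq/sort/xargs pipeline visible in the trace:

# Sketch of the per-path check: after the 4420 listener is gone, the only
# remaining path on each discovery controller must be 4421.
for ctrlr in mdns0_nvme0 mdns1_nvme0; do
    paths=$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    # A stale second path would show up here as "4420 4421".
    [[ "$paths" == "4421" ]] || echo "unexpected paths for $ctrlr: $paths"
done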
00:27:57.437 13:38:14 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # sort -n 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@72 -- # xargs 00:27:57.437 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.437 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.437 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:57.437 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.437 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.437 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:27:57.437 13:38:14 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:57.438 13:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.438 13:38:14 -- common/autotest_common.sh@10 -- # set +x 00:27:57.438 13:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.438 13:38:14 -- host/mdns_discovery.sh@172 -- # sleep 1 00:27:57.702 [2024-04-26 13:38:14.888566] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:58.659 13:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.659 13:38:15 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@80 -- # sort 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@80 -- # xargs 00:27:58.659 13:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:58.659 13:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.659 13:38:15 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@68 -- # sort 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:27:58.659 13:38:15 -- host/mdns_discovery.sh@68 -- # xargs 00:27:58.659 13:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:58.659 
13:38:16 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@64 -- # sort 00:27:58.659 13:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.659 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@64 -- # xargs 00:27:58.659 13:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:27:58.659 13:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.659 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:27:58.659 13:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:27:58.659 13:38:16 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:58.659 13:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.659 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.917 13:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.917 13:38:16 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:58.917 13:38:16 -- common/autotest_common.sh@638 -- # local es=0 00:27:58.917 13:38:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:58.917 13:38:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:58.917 13:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:58.917 13:38:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:58.917 13:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:58.917 13:38:16 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:27:58.917 13:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.917 13:38:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.917 [2024-04-26 13:38:16.124404] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:27:58.917 2024/04/26 13:38:16 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:58.917 request: 00:27:58.917 { 00:27:58.917 "method": "bdev_nvme_start_mdns_discovery", 00:27:58.917 "params": { 00:27:58.917 "name": "mdns", 00:27:58.917 "svcname": "_nvme-disc._http", 00:27:58.917 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:58.917 } 00:27:58.917 } 00:27:58.917 Got JSON-RPC error response 00:27:58.917 GoRPCClient: error on JSON-RPC call 00:27:58.917 13:38:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:58.917 13:38:16 -- 
common/autotest_common.sh@641 -- # es=1 00:27:58.917 13:38:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:58.917 13:38:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:58.917 13:38:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:58.917 13:38:16 -- host/mdns_discovery.sh@183 -- # sleep 5 00:27:59.175 [2024-04-26 13:38:16.509091] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:27:59.175 [2024-04-26 13:38:16.609083] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:27:59.433 [2024-04-26 13:38:16.709095] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:59.433 [2024-04-26 13:38:16.709149] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:27:59.433 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:59.433 cookie is 0 00:27:59.433 is_local: 1 00:27:59.433 our_own: 0 00:27:59.433 wide_area: 0 00:27:59.433 multicast: 1 00:27:59.433 cached: 1 00:27:59.433 [2024-04-26 13:38:16.809104] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:27:59.433 [2024-04-26 13:38:16.809162] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:27:59.433 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:27:59.433 cookie is 0 00:27:59.433 is_local: 1 00:27:59.433 our_own: 0 00:27:59.433 wide_area: 0 00:27:59.433 multicast: 1 00:27:59.433 cached: 1 00:28:00.366 [2024-04-26 13:38:17.721650] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:00.366 [2024-04-26 13:38:17.721702] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:00.366 [2024-04-26 13:38:17.721723] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:00.366 [2024-04-26 13:38:17.809828] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:28:00.624 [2024-04-26 13:38:17.821711] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:00.624 [2024-04-26 13:38:17.821746] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:00.624 [2024-04-26 13:38:17.821766] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:00.624 [2024-04-26 13:38:17.879358] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:28:00.624 [2024-04-26 13:38:17.879418] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:28:00.624 [2024-04-26 13:38:17.909723] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:28:00.624 [2024-04-26 13:38:17.976519] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:28:00.624 [2024-04-26 13:38:17.976579] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@185 -- # 
get_mdns_discovery_svcs 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:28:03.936 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.936 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@80 -- # sort 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@80 -- # xargs 00:28:03.936 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # sort 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # xargs 00:28:03.936 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.936 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.936 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:28:03.936 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.936 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@64 -- # sort 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@64 -- # xargs 00:28:03.936 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:03.936 13:38:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:03.936 13:38:21 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:03.936 13:38:21 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:03.936 13:38:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:03.936 13:38:21 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:03.936 13:38:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:03.936 13:38:21 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:28:03.936 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.936 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.936 [2024-04-26 13:38:21.335851] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:28:03.936 2024/04/26 13:38:21 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test 
name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:28:03.936 request: 00:28:03.936 { 00:28:03.936 "method": "bdev_nvme_start_mdns_discovery", 00:28:03.936 "params": { 00:28:03.936 "name": "cdc", 00:28:03.936 "svcname": "_nvme-disc._tcp", 00:28:03.936 "hostnqn": "nqn.2021-12.io.spdk:test" 00:28:03.936 } 00:28:03.936 } 00:28:03.936 Got JSON-RPC error response 00:28:03.936 GoRPCClient: error on JSON-RPC call 00:28:03.936 13:38:21 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:03.936 13:38:21 -- common/autotest_common.sh@641 -- # es=1 00:28:03.936 13:38:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:03.936 13:38:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:03.936 13:38:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # sort 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:28:03.936 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.936 13:38:21 -- host/mdns_discovery.sh@76 -- # xargs 00:28:03.936 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:03.936 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:28:04.217 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.217 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@64 -- # sort 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@64 -- # xargs 00:28:04.217 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:28:04.217 13:38:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:04.217 13:38:21 -- common/autotest_common.sh@10 -- # set +x 00:28:04.217 13:38:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@197 -- # kill 86671 00:28:04.217 13:38:21 -- host/mdns_discovery.sh@200 -- # wait 86671 00:28:04.217 [2024-04-26 13:38:21.582217] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:28:04.492 13:38:21 -- host/mdns_discovery.sh@201 -- # kill 86751 00:28:04.492 Got SIGTERM, quitting. 00:28:04.492 13:38:21 -- host/mdns_discovery.sh@202 -- # kill 86705 00:28:04.492 13:38:21 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:28:04.492 13:38:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:04.492 13:38:21 -- nvmf/common.sh@117 -- # sync 00:28:04.492 Got SIGTERM, quitting. 
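Before the trap/kill sequence above tears the test down, two negative cases have been exercised (mdns_discovery.sh @182 and @190): with an mDNS discovery service already running, a second bdev_nvme_start_mdns_discovery must fail with JSON-RPC code -17 (File exists), whether it reuses the name mdns with a different service type or adds a new name cdc for the same _nvme-disc._tcp service. A minimal sketch of that pattern with the exact commands from the trace; NOT is the autotest_common.sh helper that succeeds only when the wrapped command fails:

# First start (earlier in the trace) succeeds:
rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# Same name, different service type -> Code=-17 Msg=File exists
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
# Different name, same _nvme-disc._tcp service -> Code=-17 Msg=File exists
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test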
00:28:04.492 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:28:04.492 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:28:04.492 avahi-daemon 0.8 exiting. 00:28:04.492 13:38:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.492 13:38:21 -- nvmf/common.sh@120 -- # set +e 00:28:04.492 13:38:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.492 13:38:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.492 rmmod nvme_tcp 00:28:04.492 rmmod nvme_fabrics 00:28:04.492 rmmod nvme_keyring 00:28:04.492 13:38:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.492 13:38:21 -- nvmf/common.sh@124 -- # set -e 00:28:04.492 13:38:21 -- nvmf/common.sh@125 -- # return 0 00:28:04.492 13:38:21 -- nvmf/common.sh@478 -- # '[' -n 86621 ']' 00:28:04.492 13:38:21 -- nvmf/common.sh@479 -- # killprocess 86621 00:28:04.492 13:38:21 -- common/autotest_common.sh@936 -- # '[' -z 86621 ']' 00:28:04.492 13:38:21 -- common/autotest_common.sh@940 -- # kill -0 86621 00:28:04.492 13:38:21 -- common/autotest_common.sh@941 -- # uname 00:28:04.492 13:38:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:04.492 13:38:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86621 00:28:04.492 13:38:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:04.492 13:38:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:04.492 killing process with pid 86621 00:28:04.492 13:38:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86621' 00:28:04.492 13:38:21 -- common/autotest_common.sh@955 -- # kill 86621 00:28:04.492 13:38:21 -- common/autotest_common.sh@960 -- # wait 86621 00:28:04.750 13:38:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:04.750 13:38:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:04.750 13:38:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:04.750 13:38:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:04.750 13:38:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:04.750 13:38:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.750 13:38:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:04.750 13:38:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.750 13:38:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:04.750 ************************************ 00:28:04.750 END TEST nvmf_mdns_discovery 00:28:04.750 ************************************ 00:28:04.750 00:28:04.750 real 0m20.952s 00:28:04.750 user 0m40.947s 00:28:04.750 sys 0m2.114s 00:28:04.750 13:38:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:04.750 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:28:04.750 13:38:22 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:28:04.750 13:38:22 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:04.750 13:38:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:04.750 13:38:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:04.750 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.010 ************************************ 00:28:05.010 START TEST nvmf_multipath 00:28:05.010 ************************************ 00:28:05.010 13:38:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:28:05.010 * Looking for 
test storage... 00:28:05.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:05.010 13:38:22 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:05.010 13:38:22 -- nvmf/common.sh@7 -- # uname -s 00:28:05.010 13:38:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.010 13:38:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.010 13:38:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.010 13:38:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.010 13:38:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.010 13:38:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.010 13:38:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.010 13:38:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.010 13:38:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.010 13:38:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:28:05.010 13:38:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:28:05.010 13:38:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.010 13:38:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.010 13:38:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:05.010 13:38:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.010 13:38:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:05.010 13:38:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.010 13:38:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.010 13:38:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.010 13:38:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.010 13:38:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.010 13:38:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.010 13:38:22 -- paths/export.sh@5 -- # export PATH 00:28:05.010 13:38:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.010 13:38:22 -- nvmf/common.sh@47 -- # : 0 00:28:05.010 13:38:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.010 13:38:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.010 13:38:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.010 13:38:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.010 13:38:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.010 13:38:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.010 13:38:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.010 13:38:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.010 13:38:22 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:05.010 13:38:22 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:05.010 13:38:22 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:05.010 13:38:22 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:05.010 13:38:22 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:05.010 13:38:22 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:28:05.010 13:38:22 -- host/multipath.sh@30 -- # nvmftestinit 00:28:05.010 13:38:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:05.010 13:38:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.010 13:38:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:05.010 13:38:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:05.010 13:38:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:05.010 13:38:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.010 13:38:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.010 13:38:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.010 13:38:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:05.010 13:38:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:05.010 13:38:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.010 13:38:22 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.010 13:38:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:05.010 13:38:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:05.010 13:38:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:05.010 13:38:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:05.010 13:38:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:05.010 13:38:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.010 13:38:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:05.010 13:38:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:05.010 13:38:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:05.010 13:38:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:05.010 13:38:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:05.010 13:38:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:05.010 Cannot find device "nvmf_tgt_br" 00:28:05.010 13:38:22 -- nvmf/common.sh@155 -- # true 00:28:05.010 13:38:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:05.010 Cannot find device "nvmf_tgt_br2" 00:28:05.010 13:38:22 -- nvmf/common.sh@156 -- # true 00:28:05.010 13:38:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:05.010 13:38:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:05.010 Cannot find device "nvmf_tgt_br" 00:28:05.010 13:38:22 -- nvmf/common.sh@158 -- # true 00:28:05.010 13:38:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:05.010 Cannot find device "nvmf_tgt_br2" 00:28:05.010 13:38:22 -- nvmf/common.sh@159 -- # true 00:28:05.010 13:38:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:05.269 13:38:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:05.269 13:38:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:05.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.269 13:38:22 -- nvmf/common.sh@162 -- # true 00:28:05.269 13:38:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:05.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:05.269 13:38:22 -- nvmf/common.sh@163 -- # true 00:28:05.269 13:38:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:05.269 13:38:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:05.269 13:38:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:05.269 13:38:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:05.269 13:38:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:05.269 13:38:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:05.269 13:38:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:05.269 13:38:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:05.269 13:38:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:05.269 13:38:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:05.269 13:38:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:05.269 13:38:22 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:28:05.269 13:38:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:05.269 13:38:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:05.269 13:38:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:05.269 13:38:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:05.269 13:38:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:05.269 13:38:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:05.269 13:38:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:05.269 13:38:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:05.269 13:38:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:05.269 13:38:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:05.269 13:38:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:05.269 13:38:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:05.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:28:05.269 00:28:05.269 --- 10.0.0.2 ping statistics --- 00:28:05.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.269 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:28:05.269 13:38:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:05.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:05.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:28:05.269 00:28:05.269 --- 10.0.0.3 ping statistics --- 00:28:05.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.269 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:28:05.269 13:38:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:05.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:05.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:28:05.269 00:28:05.269 --- 10.0.0.1 ping statistics --- 00:28:05.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.269 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:28:05.269 13:38:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.269 13:38:22 -- nvmf/common.sh@422 -- # return 0 00:28:05.269 13:38:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:05.269 13:38:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.269 13:38:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:05.269 13:38:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:05.269 13:38:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.269 13:38:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:05.269 13:38:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:05.529 13:38:22 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:28:05.529 13:38:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:05.529 13:38:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:05.529 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.529 13:38:22 -- nvmf/common.sh@470 -- # nvmfpid=87266 00:28:05.529 13:38:22 -- nvmf/common.sh@471 -- # waitforlisten 87266 00:28:05.529 13:38:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:05.529 13:38:22 -- common/autotest_common.sh@817 -- # '[' -z 87266 ']' 00:28:05.529 13:38:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.529 13:38:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:05.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.529 13:38:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.529 13:38:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:05.529 13:38:22 -- common/autotest_common.sh@10 -- # set +x 00:28:05.529 [2024-04-26 13:38:22.797272] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:28:05.529 [2024-04-26 13:38:22.797390] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.529 [2024-04-26 13:38:22.941700] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:05.788 [2024-04-26 13:38:23.062889] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.788 [2024-04-26 13:38:23.062971] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.788 [2024-04-26 13:38:23.062984] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.788 [2024-04-26 13:38:23.062994] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.788 [2024-04-26 13:38:23.063001] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:05.788 [2024-04-26 13:38:23.063190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:05.788 [2024-04-26 13:38:23.063199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.733 13:38:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:06.733 13:38:23 -- common/autotest_common.sh@850 -- # return 0 00:28:06.733 13:38:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:06.733 13:38:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:06.733 13:38:23 -- common/autotest_common.sh@10 -- # set +x 00:28:06.733 13:38:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.733 13:38:23 -- host/multipath.sh@33 -- # nvmfapp_pid=87266 00:28:06.733 13:38:23 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:06.991 [2024-04-26 13:38:24.153484] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.991 13:38:24 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:07.249 Malloc0 00:28:07.249 13:38:24 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:28:07.507 13:38:24 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.766 13:38:25 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.023 [2024-04-26 13:38:25.324489] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.023 13:38:25 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:08.282 [2024-04-26 13:38:25.592613] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:08.282 13:38:25 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:28:08.282 13:38:25 -- host/multipath.sh@44 -- # bdevperf_pid=87372 00:28:08.282 13:38:25 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:08.282 13:38:25 -- host/multipath.sh@47 -- # waitforlisten 87372 /var/tmp/bdevperf.sock 00:28:08.282 13:38:25 -- common/autotest_common.sh@817 -- # '[' -z 87372 ']' 00:28:08.282 13:38:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.282 13:38:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:08.282 13:38:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:08.282 13:38:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:08.282 13:38:25 -- common/autotest_common.sh@10 -- # set +x 00:28:09.659 13:38:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:09.659 13:38:26 -- common/autotest_common.sh@850 -- # return 0 00:28:09.659 13:38:26 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:09.659 13:38:26 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:28:09.916 Nvme0n1 00:28:09.916 13:38:27 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:28:10.481 Nvme0n1 00:28:10.481 13:38:27 -- host/multipath.sh@78 -- # sleep 1 00:28:10.481 13:38:27 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:28:11.412 13:38:28 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:28:11.412 13:38:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:11.669 13:38:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:11.927 13:38:29 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:28:11.927 13:38:29 -- host/multipath.sh@65 -- # dtrace_pid=87459 00:28:11.927 13:38:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:11.927 13:38:29 -- host/multipath.sh@66 -- # sleep 6 00:28:18.483 13:38:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:18.483 13:38:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:18.483 13:38:35 -- host/multipath.sh@67 -- # active_port=4421 00:28:18.483 13:38:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:18.483 Attaching 4 probes... 
00:28:18.483 @path[10.0.0.2, 4421]: 16366 00:28:18.483 @path[10.0.0.2, 4421]: 16953 00:28:18.483 @path[10.0.0.2, 4421]: 17188 00:28:18.483 @path[10.0.0.2, 4421]: 16031 00:28:18.483 @path[10.0.0.2, 4421]: 16343 00:28:18.483 13:38:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:18.483 13:38:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:18.483 13:38:35 -- host/multipath.sh@69 -- # sed -n 1p 00:28:18.483 13:38:35 -- host/multipath.sh@69 -- # port=4421 00:28:18.483 13:38:35 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:18.483 13:38:35 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:18.483 13:38:35 -- host/multipath.sh@72 -- # kill 87459 00:28:18.483 13:38:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:18.483 13:38:35 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:28:18.483 13:38:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:18.483 13:38:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:18.741 13:38:36 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:28:18.741 13:38:36 -- host/multipath.sh@65 -- # dtrace_pid=87595 00:28:18.741 13:38:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:18.741 13:38:36 -- host/multipath.sh@66 -- # sleep 6 00:28:25.297 13:38:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:25.297 13:38:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:25.297 13:38:42 -- host/multipath.sh@67 -- # active_port=4420 00:28:25.297 13:38:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.297 Attaching 4 probes... 
00:28:25.297 @path[10.0.0.2, 4420]: 16540 00:28:25.297 @path[10.0.0.2, 4420]: 16836 00:28:25.297 @path[10.0.0.2, 4420]: 16862 00:28:25.297 @path[10.0.0.2, 4420]: 17030 00:28:25.297 @path[10.0.0.2, 4420]: 17009 00:28:25.297 13:38:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:25.297 13:38:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:25.297 13:38:42 -- host/multipath.sh@69 -- # sed -n 1p 00:28:25.297 13:38:42 -- host/multipath.sh@69 -- # port=4420 00:28:25.297 13:38:42 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:25.297 13:38:42 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:25.297 13:38:42 -- host/multipath.sh@72 -- # kill 87595 00:28:25.297 13:38:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:25.297 13:38:42 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:28:25.297 13:38:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:25.297 13:38:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:25.555 13:38:42 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:28:25.555 13:38:42 -- host/multipath.sh@65 -- # dtrace_pid=87726 00:28:25.555 13:38:42 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:25.555 13:38:42 -- host/multipath.sh@66 -- # sleep 6 00:28:32.149 13:38:48 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:32.149 13:38:48 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:32.149 13:38:49 -- host/multipath.sh@67 -- # active_port=4421 00:28:32.149 13:38:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:32.149 Attaching 4 probes... 
00:28:32.149 @path[10.0.0.2, 4421]: 12482 00:28:32.149 @path[10.0.0.2, 4421]: 16586 00:28:32.149 @path[10.0.0.2, 4421]: 16631 00:28:32.149 @path[10.0.0.2, 4421]: 16710 00:28:32.149 @path[10.0.0.2, 4421]: 16734 00:28:32.149 13:38:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:32.149 13:38:49 -- host/multipath.sh@69 -- # sed -n 1p 00:28:32.149 13:38:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:32.149 13:38:49 -- host/multipath.sh@69 -- # port=4421 00:28:32.149 13:38:49 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:32.149 13:38:49 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:32.149 13:38:49 -- host/multipath.sh@72 -- # kill 87726 00:28:32.149 13:38:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:32.149 13:38:49 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:32.149 13:38:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:32.149 13:38:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:32.406 13:38:49 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:32.406 13:38:49 -- host/multipath.sh@65 -- # dtrace_pid=87856 00:28:32.406 13:38:49 -- host/multipath.sh@66 -- # sleep 6 00:28:32.406 13:38:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:38.964 13:38:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:38.964 13:38:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:38.964 13:38:56 -- host/multipath.sh@67 -- # active_port= 00:28:38.964 13:38:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:38.964 Attaching 4 probes... 
00:28:38.964 00:28:38.964 00:28:38.964 00:28:38.964 00:28:38.964 00:28:38.964 13:38:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:38.964 13:38:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:38.964 13:38:56 -- host/multipath.sh@69 -- # sed -n 1p 00:28:38.964 13:38:56 -- host/multipath.sh@69 -- # port= 00:28:38.964 13:38:56 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:38.964 13:38:56 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:38.964 13:38:56 -- host/multipath.sh@72 -- # kill 87856 00:28:38.964 13:38:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:38.964 13:38:56 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:38.964 13:38:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:39.222 13:38:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:39.480 13:38:56 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:39.480 13:38:56 -- host/multipath.sh@65 -- # dtrace_pid=87992 00:28:39.480 13:38:56 -- host/multipath.sh@66 -- # sleep 6 00:28:39.480 13:38:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:46.035 13:39:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:46.036 13:39:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:46.036 13:39:03 -- host/multipath.sh@67 -- # active_port=4421 00:28:46.036 13:39:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:46.036 Attaching 4 probes... 
00:28:46.036 @path[10.0.0.2, 4421]: 16277 00:28:46.036 @path[10.0.0.2, 4421]: 15749 00:28:46.036 @path[10.0.0.2, 4421]: 16700 00:28:46.036 @path[10.0.0.2, 4421]: 16776 00:28:46.036 @path[10.0.0.2, 4421]: 16727 00:28:46.036 13:39:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:46.036 13:39:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:46.036 13:39:03 -- host/multipath.sh@69 -- # sed -n 1p 00:28:46.036 13:39:03 -- host/multipath.sh@69 -- # port=4421 00:28:46.036 13:39:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:46.036 13:39:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:46.036 13:39:03 -- host/multipath.sh@72 -- # kill 87992 00:28:46.036 13:39:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:46.036 13:39:03 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:46.036 [2024-04-26 13:39:03.302702] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302768] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302794] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302824] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302834] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302844] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302871] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302916] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.302924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 (same nvmf_tcp_qpair_set_recv_state message repeated for tqpair=0x1d1e810 at each subsequent timestamp through [2024-04-26 13:39:03.303275]) 00:28:46.036 [2024-04-26 13:39:03.303283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the
state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.303291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.303299] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.303307] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 [2024-04-26 13:39:03.303315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1e810 is same with the state(5) to be set 00:28:46.036 13:39:03 -- host/multipath.sh@101 -- # sleep 1 00:28:46.981 13:39:04 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:46.981 13:39:04 -- host/multipath.sh@65 -- # dtrace_pid=88128 00:28:46.981 13:39:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:46.981 13:39:04 -- host/multipath.sh@66 -- # sleep 6 00:28:53.548 13:39:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:53.548 13:39:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:53.548 13:39:10 -- host/multipath.sh@67 -- # active_port=4420 00:28:53.548 13:39:10 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:53.548 Attaching 4 probes... 00:28:53.548 @path[10.0.0.2, 4420]: 16009 00:28:53.548 @path[10.0.0.2, 4420]: 16400 00:28:53.548 @path[10.0.0.2, 4420]: 16485 00:28:53.548 @path[10.0.0.2, 4420]: 16453 00:28:53.548 @path[10.0.0.2, 4420]: 16226 00:28:53.548 13:39:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:53.548 13:39:10 -- host/multipath.sh@69 -- # sed -n 1p 00:28:53.548 13:39:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:53.548 13:39:10 -- host/multipath.sh@69 -- # port=4420 00:28:53.548 13:39:10 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:53.548 13:39:10 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:53.548 13:39:10 -- host/multipath.sh@72 -- # kill 88128 00:28:53.548 13:39:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:53.548 13:39:10 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:53.548 [2024-04-26 13:39:10.829104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:53.548 13:39:10 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:53.806 13:39:11 -- host/multipath.sh@111 -- # sleep 6 00:29:00.374 13:39:17 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:29:00.374 13:39:17 -- host/multipath.sh@65 -- # dtrace_pid=88316 00:29:00.374 13:39:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87266 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:00.374 13:39:17 -- host/multipath.sh@66 -- # sleep 6 00:29:06.965 13:39:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:06.965 13:39:23 -- 
host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:06.965 13:39:23 -- host/multipath.sh@67 -- # active_port=4421 00:29:06.965 13:39:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:06.965 Attaching 4 probes... 00:29:06.965 @path[10.0.0.2, 4421]: 15802 00:29:06.965 @path[10.0.0.2, 4421]: 15730 00:29:06.965 @path[10.0.0.2, 4421]: 15793 00:29:06.965 @path[10.0.0.2, 4421]: 16139 00:29:06.965 @path[10.0.0.2, 4421]: 15986 00:29:06.965 13:39:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:06.965 13:39:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:06.965 13:39:23 -- host/multipath.sh@69 -- # sed -n 1p 00:29:06.965 13:39:23 -- host/multipath.sh@69 -- # port=4421 00:29:06.965 13:39:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:06.965 13:39:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:06.965 13:39:23 -- host/multipath.sh@72 -- # kill 88316 00:29:06.965 13:39:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:06.965 13:39:23 -- host/multipath.sh@114 -- # killprocess 87372 00:29:06.965 13:39:23 -- common/autotest_common.sh@936 -- # '[' -z 87372 ']' 00:29:06.965 13:39:23 -- common/autotest_common.sh@940 -- # kill -0 87372 00:29:06.965 13:39:23 -- common/autotest_common.sh@941 -- # uname 00:29:06.965 13:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:06.965 13:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87372 00:29:06.965 13:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:06.965 13:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:06.965 killing process with pid 87372 00:29:06.965 13:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87372' 00:29:06.965 13:39:23 -- common/autotest_common.sh@955 -- # kill 87372 00:29:06.965 13:39:23 -- common/autotest_common.sh@960 -- # wait 87372 00:29:06.965 Connection closed with partial response: 00:29:06.965 00:29:06.965 00:29:06.965 13:39:23 -- host/multipath.sh@116 -- # wait 87372 00:29:06.965 13:39:23 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:06.965 [2024-04-26 13:38:25.659457] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:29:06.965 [2024-04-26 13:38:25.659580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87372 ] 00:29:06.965 [2024-04-26 13:38:25.798818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.965 [2024-04-26 13:38:25.922984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.965 Running I/O for 90 seconds... 
00:29:06.965 [2024-04-26 13:38:36.145950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.965 [2024-04-26 13:38:36.146046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.965 [2024-04-26 13:38:36.146116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.965 [2024-04-26 13:38:36.146138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.965 [2024-04-26 13:38:36.146162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.965 [2024-04-26 13:38:36.146178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.965 [2024-04-26 13:38:36.146199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.965 [2024-04-26 13:38:36.146214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.965 [2024-04-26 13:38:36.146235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.965 [2024-04-26 13:38:36.146249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.965 [2024-04-26 13:38:36.146270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.966 [2024-04-26 13:38:36.146494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.146694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.146709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:06.966 [2024-04-26 13:38:36.147486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:42 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.147967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.147982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.966 [2024-04-26 13:38:36.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.966 [2024-04-26 13:38:36.148026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 
dnr:0 00:29:06.967 [2024-04-26 13:38:36.148594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.148978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.148993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.967 [2024-04-26 13:38:36.149436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.967 [2024-04-26 13:38:36.149457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:06.968 [2024-04-26 13:38:36.149695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.149841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.149856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.968 [2024-04-26 13:38:36.150621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.150972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.150987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:29:06.968 [2024-04-26 13:38:36.151571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.968 [2024-04-26 13:38:36.151659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.968 [2024-04-26 13:38:36.151688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:36.151703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:36.151740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.969 [2024-04-26 13:38:36.151787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.969 [2024-04-26 13:38:36.151828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.969 [2024-04-26 13:38:36.151864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.969 [2024-04-26 13:38:36.151900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:36.151922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.969 [2024-04-26 13:38:36.151937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.690688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.690770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.690852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.690874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.690898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.690914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.690936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.690951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.690973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.690988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.969 [2024-04-26 13:38:42.691858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.691969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.691991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.969 [2024-04-26 13:38:42.692256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.969 [2024-04-26 13:38:42.692271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.692968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.692992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:29:06.970 [2024-04-26 13:38:42.693108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.970 [2024-04-26 13:38:42.693669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.970 [2024-04-26 13:38:42.693684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.693792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.693816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.693845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.693861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.693887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.693919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.693947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.693962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.693987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.971 [2024-04-26 13:38:42.694416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.971 [2024-04-26 13:38:42.694574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 
nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.694970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.694985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.971 [2024-04-26 13:38:42.695522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.971 [2024-04-26 13:38:42.695537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:29:06.972 [2024-04-26 13:38:42.695788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.695970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.695996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.972 [2024-04-26 13:38:42.696273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:42.696807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:42.696843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.972 [2024-04-26 13:38:49.815874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.815931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.815978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.816002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.816017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.972 [2024-04-26 13:38:49.816037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.972 [2024-04-26 13:38:49.816052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.816977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.816992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:29:06.973 [2024-04-26 13:38:49.817393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.973 [2024-04-26 13:38:49.817876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.973 [2024-04-26 13:38:49.817898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.817913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.817935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.817950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.817971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.817986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.818960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.818978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.974 [2024-04-26 13:38:49.819088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.974 [2024-04-26 13:38:49.819733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.974 [2024-04-26 13:38:49.819750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.819972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.819987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
00:29:06.975 [2024-04-26 13:38:49.820248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.820406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.820968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.820983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.975 [2024-04-26 13:38:49.821019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.821055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.821091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.821127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.821174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.975 [2024-04-26 13:38:49.821195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.975 [2024-04-26 13:38:49.821210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.976 [2024-04-26 13:38:49.821246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.976 [2024-04-26 13:38:49.821283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.976 [2024-04-26 13:38:49.821319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:06.976 [2024-04-26 13:38:49.821355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.821957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.821972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.822854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.822882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.822909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.822926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.822949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.822985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 
00:29:06.976 [2024-04-26 13:38:49.823388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.976 [2024-04-26 13:38:49.823588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.976 [2024-04-26 13:38:49.823603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.823976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.823991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.824242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.824257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.977 [2024-04-26 13:38:49.835774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.835938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.835953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 
lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.836965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.836979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.837000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.837014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.977 [2024-04-26 13:38:49.837035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.977 [2024-04-26 13:38:49.837049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:29:06.978 [2024-04-26 13:38:49.837575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.837868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.837904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.837955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.837979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.837994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.838030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.838065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.838100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.978 [2024-04-26 13:38:49.838136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.978 [2024-04-26 13:38:49.838469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.978 [2024-04-26 13:38:49.838484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.979 [2024-04-26 13:38:49.838716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.838752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.838969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.838983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.839019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.839054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.979 [2024-04-26 13:38:49.839089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.839628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.839644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.840504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.840531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.840558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.840575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.979 [2024-04-26 13:38:49.840607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.979 [2024-04-26 13:38:49.840622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:29:06.980 [2024-04-26 13:38:49.840679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.840969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.840984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.841975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.841996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.842011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.842031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.842046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.842067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.842082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.980 [2024-04-26 13:38:49.842102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.980 [2024-04-26 13:38:49.842117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.842981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.842996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:29:06.981 [2024-04-26 13:38:49.843887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.843973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.843995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.844009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.844030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.844045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.844070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.844085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.844106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.844121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.981 [2024-04-26 13:38:49.844148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.981 [2024-04-26 13:38:49.844164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.844424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844632] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.844975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.844989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 
[2024-04-26 13:38:49.845025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91368 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.982 [2024-04-26 13:38:49.845412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.845578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.845593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.846387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.846433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.982 [2024-04-26 13:38:49.846462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.982 [2024-04-26 13:38:49.846479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.846951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.846966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:29:06.983 [2024-04-26 13:38:49.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.847215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.847229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.983 [2024-04-26 13:38:49.856638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.983 [2024-04-26 13:38:49.856659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.856956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.856977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.984 [2024-04-26 13:38:49.856991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.857411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.857426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:29:06.984 [2024-04-26 13:38:49.858878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.984 [2024-04-26 13:38:49.858893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.984 [2024-04-26 13:38:49.858914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.858928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.858949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.858963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.858994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.985 [2024-04-26 13:38:49.859762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.859935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.985 [2024-04-26 13:38:49.859971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.859992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.985 [2024-04-26 13:38:49.860332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.985 [2024-04-26 13:38:49.860354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.860369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.986 [2024-04-26 13:38:49.860696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.860731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.860767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.860825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.860847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.860862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
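Every completion in this stretch reports the same status pair "(03/02)": NVMe Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), and dnr:0, so the controller leaves the Do Not Retry bit clear and the host may resubmit the command on another path once the ANA state changes. A minimal decode sketch follows, assuming a hypothetical cpl_status struct that simply mirrors the sct/sc/dnr fields printed above (the names are not SPDK's; only the numeric values come from these lines and the NVMe specification).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe status values printed as "(03/02)" in the completions above. */
#define NVME_SCT_PATH            0x3   /* Status Code Type: Path Related Status        */
#define NVME_SC_ANA_INACCESSIBLE 0x02  /* Status Code: Asymmetric Access Inaccessible  */

struct cpl_status {        /* hypothetical mirror of the fields in each completion line */
    uint8_t sct;           /* "03" above                                                */
    uint8_t sc;            /* "02" above                                                */
    uint8_t dnr;           /* "dnr:0" above: Do Not Retry is clear                      */
};

/* True when a completion failed only because the path's ANA state is
 * INACCESSIBLE and the controller still permits a retry (DNR clear). */
static bool ana_inaccessible_retryable(const struct cpl_status *st)
{
    return st->sct == NVME_SCT_PATH &&
           st->sc == NVME_SC_ANA_INACCESSIBLE &&
           st->dnr == 0;
}

int main(void)
{
    /* Example values taken from one completion above (e.g. qid:1 cid:12). */
    struct cpl_status st = { .sct = 0x3, .sc = 0x02, .dnr = 0 };
    printf("retryable on another path: %s\n",
           ana_inaccessible_retryable(&st) ? "yes" : "no");
    return 0;
}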
00:29:06.986 [2024-04-26 13:38:49.861842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.861983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.861998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.986 [2024-04-26 13:38:49.862577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.986 [2024-04-26 13:38:49.862609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.862976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.862999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 
nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.863742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.863757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.864409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.864436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.864463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.864481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.864503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.864518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.864550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.864567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.987 [2024-04-26 13:38:49.864588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.987 [2024-04-26 13:38:49.864603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.864970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.864995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:29:06.988 [2024-04-26 13:38:49.865155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.988 [2024-04-26 13:38:49.865810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.865848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.865887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.865923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.865959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.865992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.866008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.866030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.866045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.988 [2024-04-26 13:38:49.866067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.988 [2024-04-26 13:38:49.866081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.989 [2024-04-26 13:38:49.866259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.866690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.866726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.866762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.866794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.866812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.989 [2024-04-26 13:38:49.874657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.874692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.874727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.874749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
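The command prints also pin down the I/O geometry of the workload: each command is len:8 blocks, each WRITE SGL carries len:0x1000 (4096) bytes, so the namespace block size works out to 0x1000 / 8 = 512 bytes, and successive WRITE LBAs (92144, 92152, 92160, ...) advance by 8 blocks, i.e. sequential 4 KiB writes. A small arithmetic check, with hypothetical variable names; the numbers are copied from the lines above.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t sgl_bytes  = 0x1000;  /* "len:0x1000" on each WRITE SGL    */
    const uint64_t num_blocks = 8;       /* "len:8" on each command           */
    const uint64_t lbas[] = { 92144, 92152, 92160, 92168 };  /* WRITE LBAs above */

    uint64_t block_size = sgl_bytes / num_blocks;            /* 512 bytes     */
    printf("block size = %llu bytes, transfer = %llu bytes per command\n",
           (unsigned long long)block_size,
           (unsigned long long)(block_size * num_blocks));

    /* Consecutive WRITEs advance by exactly one transfer: sequential I/O. */
    for (size_t i = 1; i < sizeof(lbas) / sizeof(lbas[0]); i++)
        assert(lbas[i] - lbas[i - 1] == num_blocks);
    return 0;
}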
00:29:06.989 [2024-04-26 13:38:49.875873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.875980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.875994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.876015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.876029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.876050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.876065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.876086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.876101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.876121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.989 [2024-04-26 13:38:49.876135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.989 [2024-04-26 13:38:49.876156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.990 [2024-04-26 13:38:49.876957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.876977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.876992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.877670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.990 [2024-04-26 13:38:49.877685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.990 [2024-04-26 13:38:49.878332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:29:06.991 [2024-04-26 13:38:49.878706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.878958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.878984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.879945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.879972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.991 [2024-04-26 13:38:49.880019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:06.991 [2024-04-26 13:38:49.880108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.991 [2024-04-26 13:38:49.880327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.991 [2024-04-26 13:38:49.880353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.992 [2024-04-26 13:38:49.880371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.992 [2024-04-26 13:38:49.880397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.992 [2024-04-26 13:38:49.880415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.992 [2024-04-26 13:38:49.880441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.993 [2024-04-26 13:38:49.880458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.993 [2024-04-26 13:38:49.880485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.993 [2024-04-26 13:38:49.880510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.993 [2024-04-26 13:38:49.880538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.993 [2024-04-26 13:38:49.880556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.993 [2024-04-26 13:38:49.880583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.993 [2024-04-26 13:38:49.880601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.993 [2024-04-26 13:38:49.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.993 [2024-04-26 13:38:49.880645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.880971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.880997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.881015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.881065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 
m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.994 [2024-04-26 13:38:49.881474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.881519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.881546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.881565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.882960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.882987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.994 [2024-04-26 13:38:49.883801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.994 [2024-04-26 13:38:49.883850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:06.994 [2024-04-26 13:38:49.883877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.883896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.883922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.883940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.883966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.883984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91680 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.884964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.884991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:29:06.995 [2024-04-26 13:38:49.885505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.885962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.885992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.995 [2024-04-26 13:38:49.886614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:06.995 [2024-04-26 13:38:49.886645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.886966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.886984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:92056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:06.996 [2024-04-26 13:38:49.887132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:92080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 
lba:91256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.887909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.887957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.887988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:06.996 
[2024-04-26 13:38:49.888630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.996 [2024-04-26 13:38:49.888696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.888745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.888810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.888860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.888908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.888957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.888987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.889006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.889037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.889064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:06.996 [2024-04-26 13:38:49.889097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.996 [2024-04-26 13:38:49.889115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:38:49.889146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:38:49.889165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:38:49.889196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:38:49.889214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:38:49.889466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:38:49.889497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.303837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.303886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.303916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.303932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.303949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.303970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.303985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.303999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:121880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:06.997 [2024-04-26 13:39:03.304886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.304972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.304995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.305024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.305058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.997 [2024-04-26 13:39:03.305087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 
13:39:03.305190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.997 [2024-04-26 13:39:03.305588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.997 [2024-04-26 13:39:03.305613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.305977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.305990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:122648 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:121984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:121992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:122008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:122024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:122048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 
[2024-04-26 13:39:03.306869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.306927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.306971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.306985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:06.998 [2024-04-26 13:39:03.307197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:122072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.998 [2024-04-26 13:39:03.307226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.998 [2024-04-26 13:39:03.307241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:122136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307769] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.307982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.307997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.999 [2024-04-26 13:39:03.308433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23bf800 is same with the state(5) to be set 00:29:06.999 [2024-04-26 13:39:03.308470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:06.999 [2024-04-26 13:39:03.308481] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:06.999 [2024-04-26 13:39:03.308491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122376 len:8 PRP1 0x0 PRP2 0x0 00:29:06.999 [2024-04-26 13:39:03.308504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308596] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23bf800 was disconnected and freed. reset controller. 00:29:06.999 [2024-04-26 13:39:03.308755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.999 [2024-04-26 13:39:03.308795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.999 [2024-04-26 13:39:03.308829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.999 [2024-04-26 13:39:03.308856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:06.999 [2024-04-26 13:39:03.308883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.999 [2024-04-26 13:39:03.308896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b75f0 is same with the state(5) to be set 00:29:06.999 [2024-04-26 13:39:03.310359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.999 [2024-04-26 13:39:03.310404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b75f0 (9): Bad file descriptor 00:29:06.999 [2024-04-26 13:39:03.310551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.999 [2024-04-26 13:39:03.310612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.999 [2024-04-26 13:39:03.310635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b75f0 with addr=10.0.0.2, port=4421 00:29:06.999 [2024-04-26 13:39:03.310651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b75f0 is same with the state(5) to be set 00:29:06.999 [2024-04-26 13:39:03.310676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b75f0 (9): Bad file descriptor 00:29:06.999 [2024-04-26 13:39:03.310699] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.999 [2024-04-26 13:39:03.310721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.999 [2024-04-26 13:39:03.310736] 
nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.999 [2024-04-26 13:39:03.310771] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.999 [2024-04-26 13:39:03.310816] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.999 [2024-04-26 13:39:13.397332] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:06.999 Received shutdown signal, test time was about 55.690085 seconds 00:29:06.999 00:29:06.999 Latency(us) 00:29:06.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:06.999 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:06.999 Verification LBA range: start 0x0 length 0x4000 00:29:06.999 Nvme0n1 : 55.69 7027.48 27.45 0.00 0.00 18188.39 558.55 7107438.78 00:29:06.999 =================================================================================================================== 00:29:06.999 Total : 7027.48 27.45 0.00 0.00 18188.39 558.55 7107438.78 00:29:06.999 13:39:23 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:07.000 13:39:24 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:29:07.000 13:39:24 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:07.000 13:39:24 -- host/multipath.sh@125 -- # nvmftestfini 00:29:07.000 13:39:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:07.000 13:39:24 -- nvmf/common.sh@117 -- # sync 00:29:07.000 13:39:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:07.000 13:39:24 -- nvmf/common.sh@120 -- # set +e 00:29:07.000 13:39:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:07.000 13:39:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:07.000 rmmod nvme_tcp 00:29:07.000 rmmod nvme_fabrics 00:29:07.000 rmmod nvme_keyring 00:29:07.000 13:39:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:07.000 13:39:24 -- nvmf/common.sh@124 -- # set -e 00:29:07.000 13:39:24 -- nvmf/common.sh@125 -- # return 0 00:29:07.000 13:39:24 -- nvmf/common.sh@478 -- # '[' -n 87266 ']' 00:29:07.000 13:39:24 -- nvmf/common.sh@479 -- # killprocess 87266 00:29:07.000 13:39:24 -- common/autotest_common.sh@936 -- # '[' -z 87266 ']' 00:29:07.000 13:39:24 -- common/autotest_common.sh@940 -- # kill -0 87266 00:29:07.000 13:39:24 -- common/autotest_common.sh@941 -- # uname 00:29:07.000 13:39:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:07.000 13:39:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87266 00:29:07.000 13:39:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:07.000 13:39:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:07.000 killing process with pid 87266 00:29:07.000 13:39:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87266' 00:29:07.000 13:39:24 -- common/autotest_common.sh@955 -- # kill 87266 00:29:07.000 13:39:24 -- common/autotest_common.sh@960 -- # wait 87266 00:29:07.259 13:39:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:07.259 13:39:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:07.259 13:39:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:07.259 13:39:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:07.259 13:39:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:07.259 13:39:24 -- 
nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.259 13:39:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.259 13:39:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.259 13:39:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:07.259 00:29:07.259 real 1m2.279s 00:29:07.259 user 2m56.641s 00:29:07.259 sys 0m14.094s 00:29:07.259 13:39:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:07.259 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.259 ************************************ 00:29:07.259 END TEST nvmf_multipath 00:29:07.259 ************************************ 00:29:07.259 13:39:24 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:07.259 13:39:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:07.259 13:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.259 13:39:24 -- common/autotest_common.sh@10 -- # set +x 00:29:07.259 ************************************ 00:29:07.259 START TEST nvmf_timeout 00:29:07.259 ************************************ 00:29:07.259 13:39:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:29:07.517 * Looking for test storage... 00:29:07.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:07.517 13:39:24 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:07.517 13:39:24 -- nvmf/common.sh@7 -- # uname -s 00:29:07.517 13:39:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.517 13:39:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.517 13:39:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.517 13:39:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.517 13:39:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.517 13:39:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.517 13:39:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.517 13:39:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.517 13:39:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.517 13:39:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.517 13:39:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:29:07.517 13:39:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:29:07.517 13:39:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.517 13:39:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.517 13:39:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:07.517 13:39:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.517 13:39:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:07.517 13:39:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.517 13:39:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.517 13:39:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.517 13:39:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.517 13:39:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.518 13:39:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.518 13:39:24 -- paths/export.sh@5 -- # export PATH 00:29:07.518 13:39:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.518 13:39:24 -- nvmf/common.sh@47 -- # : 0 00:29:07.518 13:39:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.518 13:39:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.518 13:39:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.518 13:39:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.518 13:39:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.518 13:39:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.518 13:39:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.518 13:39:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.518 13:39:24 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:07.518 13:39:24 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:07.518 13:39:24 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:07.518 13:39:24 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:07.518 13:39:24 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:07.518 13:39:24 -- host/timeout.sh@19 -- # nvmftestinit 00:29:07.518 13:39:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:07.518 13:39:24 -- nvmf/common.sh@435 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:29:07.518 13:39:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:07.518 13:39:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:07.518 13:39:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:07.518 13:39:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.518 13:39:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.518 13:39:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.518 13:39:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:07.518 13:39:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:07.518 13:39:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:07.518 13:39:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:07.518 13:39:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:07.518 13:39:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:07.518 13:39:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.518 13:39:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.518 13:39:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:07.518 13:39:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:07.518 13:39:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:07.518 13:39:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:07.518 13:39:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:07.518 13:39:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.518 13:39:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:07.518 13:39:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:07.518 13:39:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:07.518 13:39:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:07.518 13:39:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:07.518 13:39:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:07.518 Cannot find device "nvmf_tgt_br" 00:29:07.518 13:39:24 -- nvmf/common.sh@155 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:07.518 Cannot find device "nvmf_tgt_br2" 00:29:07.518 13:39:24 -- nvmf/common.sh@156 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:07.518 13:39:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:07.518 Cannot find device "nvmf_tgt_br" 00:29:07.518 13:39:24 -- nvmf/common.sh@158 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:07.518 Cannot find device "nvmf_tgt_br2" 00:29:07.518 13:39:24 -- nvmf/common.sh@159 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:07.518 13:39:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:07.518 13:39:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:07.518 13:39:24 -- nvmf/common.sh@162 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:07.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:07.518 13:39:24 -- nvmf/common.sh@163 -- # true 00:29:07.518 13:39:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:07.518 13:39:24 -- nvmf/common.sh@169 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:07.518 13:39:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:07.518 13:39:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:07.518 13:39:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:07.776 13:39:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:07.776 13:39:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:07.776 13:39:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:07.776 13:39:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:07.776 13:39:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:07.776 13:39:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:07.776 13:39:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:07.776 13:39:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:07.776 13:39:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:07.776 13:39:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:07.776 13:39:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:07.776 13:39:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:07.776 13:39:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:07.776 13:39:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:07.776 13:39:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:07.776 13:39:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:07.776 13:39:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:07.776 13:39:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:07.776 13:39:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:07.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:29:07.776 00:29:07.776 --- 10.0.0.2 ping statistics --- 00:29:07.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.776 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:07.776 13:39:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:07.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:07.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:29:07.776 00:29:07.776 --- 10.0.0.3 ping statistics --- 00:29:07.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.776 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:29:07.776 13:39:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:07.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:29:07.776 00:29:07.776 --- 10.0.0.1 ping statistics --- 00:29:07.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.776 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:29:07.776 13:39:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.776 13:39:25 -- nvmf/common.sh@422 -- # return 0 00:29:07.776 13:39:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:07.776 13:39:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.776 13:39:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:07.776 13:39:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:07.776 13:39:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.776 13:39:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:07.776 13:39:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:07.776 13:39:25 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:29:07.776 13:39:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:07.776 13:39:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:07.776 13:39:25 -- common/autotest_common.sh@10 -- # set +x 00:29:07.776 13:39:25 -- nvmf/common.sh@470 -- # nvmfpid=88646 00:29:07.776 13:39:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:07.776 13:39:25 -- nvmf/common.sh@471 -- # waitforlisten 88646 00:29:07.776 13:39:25 -- common/autotest_common.sh@817 -- # '[' -z 88646 ']' 00:29:07.776 13:39:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.776 13:39:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:07.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.776 13:39:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.776 13:39:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:07.776 13:39:25 -- common/autotest_common.sh@10 -- # set +x 00:29:08.034 [2024-04-26 13:39:25.272460] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:29:08.034 [2024-04-26 13:39:25.272616] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.034 [2024-04-26 13:39:25.422874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:08.292 [2024-04-26 13:39:25.568428] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.292 [2024-04-26 13:39:25.568530] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.292 [2024-04-26 13:39:25.568546] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.292 [2024-04-26 13:39:25.568557] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.292 [2024-04-26 13:39:25.568566] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.292 [2024-04-26 13:39:25.569817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.292 [2024-04-26 13:39:25.569857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.235 13:39:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:09.235 13:39:26 -- common/autotest_common.sh@850 -- # return 0 00:29:09.235 13:39:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:09.235 13:39:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:09.235 13:39:26 -- common/autotest_common.sh@10 -- # set +x 00:29:09.235 13:39:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.235 13:39:26 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:09.235 13:39:26 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:09.235 [2024-04-26 13:39:26.597134] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.235 13:39:26 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:09.495 Malloc0 00:29:09.752 13:39:26 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:10.009 13:39:27 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:10.266 13:39:27 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:10.525 [2024-04-26 13:39:27.741391] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.525 13:39:27 -- host/timeout.sh@32 -- # bdevperf_pid=88739 00:29:10.525 13:39:27 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:10.525 13:39:27 -- host/timeout.sh@34 -- # waitforlisten 88739 /var/tmp/bdevperf.sock 00:29:10.525 13:39:27 -- common/autotest_common.sh@817 -- # '[' -z 88739 ']' 00:29:10.525 13:39:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:10.525 13:39:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:10.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:10.525 13:39:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:10.525 13:39:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:10.525 13:39:27 -- common/autotest_common.sh@10 -- # set +x 00:29:10.525 [2024-04-26 13:39:27.809606] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
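For anyone skimming the trace above: the target-side bring-up for this timeout test condenses to the RPC sequence below. This is only a restatement of the commands already visible in the xtrace output (the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py and build/examples/bdevperf paths are shortened to rpc.py and bdevperf here for readability), not an independent recipe.

  rpc.py nvmf_create_transport -t tcp -o -u 8192                                    # transport options exactly as logged above
  rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 through cnode1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f    # -z: stay idle until the perform_tests RPC issued further down

Everything that follows up to "Running I/O for 10 seconds..." is this bdevperf instance initializing and attaching NVMe0 to the subsystem.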
00:29:10.525 [2024-04-26 13:39:27.809698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88739 ] 00:29:10.525 [2024-04-26 13:39:27.945811] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.782 [2024-04-26 13:39:28.079037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.713 13:39:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:11.714 13:39:28 -- common/autotest_common.sh@850 -- # return 0 00:29:11.714 13:39:28 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:11.714 13:39:29 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:12.281 NVMe0n1 00:29:12.281 13:39:29 -- host/timeout.sh@51 -- # rpc_pid=88785 00:29:12.281 13:39:29 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:12.281 13:39:29 -- host/timeout.sh@53 -- # sleep 1 00:29:12.281 Running I/O for 10 seconds... 00:29:13.215 13:39:30 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:13.476 [2024-04-26 13:39:30.692595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692675] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692694] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692703] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692721] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692738] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692747] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692755] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 
[2024-04-26 13:39:30.692764] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692772] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692823] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692848] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.692857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1151b40 is same with the state(5) to be set 00:29:13.476 [2024-04-26 13:39:30.693366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 
13:39:30.693741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.476 [2024-04-26 13:39:30.693904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.476 [2024-04-26 13:39:30.693926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.476 [2024-04-26 13:39:30.693948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:13.476 [2024-04-26 13:39:30.693960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.476 [2024-04-26 13:39:30.693971] nvme_qpair.c: 
[nvme_qpair.c keeps printing one nvme_io_qpair_print_command / spdk_nvme_print_completion pair for every request still queued on qid:1 (WRITEs for lba 77456-77496 and READs for lba 76664-77416, len:8 each); every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. Entries 13:39:30.693983 through 13:39:30.696212, elapsed 00:29:13.476-00:29:13.479.]
00:29:13.479 [2024-04-26 13:39:30.696223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e59080 is same with the state(5) to be set
00:29:13.479 [2024-04-26 13:39:30.696236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:13.479 [2024-04-26 13:39:30.696244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:13.479 [2024-04-26 13:39:30.696259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77424 len:8 PRP1 0x0 PRP2 0x0
00:29:13.479 [2024-04-26 13:39:30.696269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:13.479 [2024-04-26 13:39:30.696329] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e59080 was disconnected and freed. reset controller.
00:29:13.479 [2024-04-26 13:39:30.696588] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:13.479 [2024-04-26 13:39:30.696677] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1def9f0 (9): Bad file descriptor
00:29:13.479 [2024-04-26 13:39:30.696804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.479 [2024-04-26 13:39:30.696856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:13.479 [2024-04-26 13:39:30.696873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1def9f0 with addr=10.0.0.2, port=4420
00:29:13.479 [2024-04-26 13:39:30.696884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def9f0 is same with the state(5) to be set
00:29:13.479 [2024-04-26 13:39:30.696903] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1def9f0 (9): Bad file descriptor
00:29:13.479 [2024-04-26 13:39:30.696920] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:13.479 [2024-04-26 13:39:30.696930] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:13.479 [2024-04-26 13:39:30.696941] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:13.479 [2024-04-26 13:39:30.696961] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:13.479 [2024-04-26 13:39:30.696972] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:13.479 13:39:30 -- host/timeout.sh@56 -- # sleep 2 00:29:15.382 [2024-04-26 13:39:32.697240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-04-26 13:39:32.697370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.382 [2024-04-26 13:39:32.697391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1def9f0 with addr=10.0.0.2, port=4420 00:29:15.382 [2024-04-26 13:39:32.697407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def9f0 is same with the state(5) to be set 00:29:15.382 [2024-04-26 13:39:32.697441] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1def9f0 (9): Bad file descriptor 00:29:15.382 [2024-04-26 13:39:32.697479] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.382 [2024-04-26 13:39:32.697497] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.382 [2024-04-26 13:39:32.697509] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.382 [2024-04-26 13:39:32.697541] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.382 [2024-04-26 13:39:32.697554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.382 13:39:32 -- host/timeout.sh@57 -- # get_controller 00:29:15.382 13:39:32 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:15.382 13:39:32 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:15.641 13:39:33 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:29:15.641 13:39:33 -- host/timeout.sh@58 -- # get_bdev 00:29:15.641 13:39:33 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:15.641 13:39:33 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:15.899 13:39:33 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:29:15.899 13:39:33 -- host/timeout.sh@61 -- # sleep 5 00:29:17.274 [2024-04-26 13:39:34.697740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.274 [2024-04-26 13:39:34.697882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.274 [2024-04-26 13:39:34.697903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1def9f0 with addr=10.0.0.2, port=4420 00:29:17.274 [2024-04-26 13:39:34.697920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1def9f0 is same with the state(5) to be set 00:29:17.274 [2024-04-26 13:39:34.697954] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1def9f0 (9): Bad file descriptor 00:29:17.274 [2024-04-26 13:39:34.697978] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.274 [2024-04-26 13:39:34.697989] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.274 [2024-04-26 13:39:34.698001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:29:17.274 [2024-04-26 13:39:34.698032] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.274 [2024-04-26 13:39:34.698045] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.805 [2024-04-26 13:39:36.698109] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.372 00:29:20.372 Latency(us) 00:29:20.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.372 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:20.372 Verification LBA range: start 0x0 length 0x4000 00:29:20.372 NVMe0n1 : 8.10 1180.38 4.61 15.80 0.00 106827.77 2055.45 7015926.69 00:29:20.372 =================================================================================================================== 00:29:20.372 Total : 1180.38 4.61 15.80 0.00 106827.77 2055.45 7015926.69 00:29:20.372 0 00:29:20.938 13:39:38 -- host/timeout.sh@62 -- # get_controller 00:29:20.938 13:39:38 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.938 13:39:38 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:29:21.504 13:39:38 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:29:21.504 13:39:38 -- host/timeout.sh@63 -- # get_bdev 00:29:21.505 13:39:38 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:29:21.505 13:39:38 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:29:21.763 13:39:38 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:29:21.763 13:39:38 -- host/timeout.sh@65 -- # wait 88785 00:29:21.763 13:39:38 -- host/timeout.sh@67 -- # killprocess 88739 00:29:21.763 13:39:38 -- common/autotest_common.sh@936 -- # '[' -z 88739 ']' 00:29:21.763 13:39:38 -- common/autotest_common.sh@940 -- # kill -0 88739 00:29:21.763 13:39:38 -- common/autotest_common.sh@941 -- # uname 00:29:21.763 13:39:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:21.763 13:39:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88739 00:29:21.763 killing process with pid 88739 00:29:21.763 Received shutdown signal, test time was about 9.399600 seconds 00:29:21.763 00:29:21.763 Latency(us) 00:29:21.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.764 =================================================================================================================== 00:29:21.764 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.764 13:39:38 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:21.764 13:39:38 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:21.764 13:39:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88739' 00:29:21.764 13:39:38 -- common/autotest_common.sh@955 -- # kill 88739 00:29:21.764 13:39:38 -- common/autotest_common.sh@960 -- # wait 88739 00:29:22.022 13:39:39 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.280 [2024-04-26 13:39:39.519077] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
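Before the next subtest starts, the trace at host/timeout.sh@62-@63 above asserts that the previous controller really was torn down once its loss timeout expired. A minimal bash sketch of that check (not the literal test source; the variable names are mine, the rpc.py calls are the ones shown in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # bdev_nvme_get_controllers and bdev_get_bdevs return JSON arrays; jq pulls the names.
  ctrlrs=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
  bdevs=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
  # After the controller-loss timeout expires, the controller and its bdev should be gone,
  # so both lists are expected to come back empty ([[ '' == '' ]] in the trace above).
  [[ -z "$ctrlrs" && -z "$bdevs" ]] || exit 1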
00:29:22.281 13:39:39 -- host/timeout.sh@74 -- # bdevperf_pid=88945 00:29:22.281 13:39:39 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:29:22.281 13:39:39 -- host/timeout.sh@76 -- # waitforlisten 88945 /var/tmp/bdevperf.sock 00:29:22.281 13:39:39 -- common/autotest_common.sh@817 -- # '[' -z 88945 ']' 00:29:22.281 13:39:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:22.281 13:39:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:22.281 13:39:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:22.281 13:39:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:22.281 13:39:39 -- common/autotest_common.sh@10 -- # set +x 00:29:22.281 [2024-04-26 13:39:39.603025] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:29:22.281 [2024-04-26 13:39:39.603400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88945 ] 00:29:22.540 [2024-04-26 13:39:39.740841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.540 [2024-04-26 13:39:39.862242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.540 13:39:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:22.540 13:39:39 -- common/autotest_common.sh@850 -- # return 0 00:29:22.540 13:39:39 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:23.147 13:39:40 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:29:23.404 NVMe0n1 00:29:23.404 13:39:40 -- host/timeout.sh@84 -- # rpc_pid=88979 00:29:23.404 13:39:40 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:23.404 13:39:40 -- host/timeout.sh@86 -- # sleep 1 00:29:23.404 Running I/O for 10 seconds... 
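The attach above is what gives this subtest its timeout behaviour. A rough bash restatement of those two RPCs, with my reading of the flags (the flag values are taken from the trace; the explanations are interpretations, not quoted from SPDK documentation):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # -r -1: bdev_nvme retry count, as passed by the test.
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1
  # --reconnect-delay-sec 1      wait about 1s between reconnect attempts
  # --fast-io-fail-timeout-sec 2 start failing queued I/O back after ~2s without a connection
  # --ctrlr-loss-timeout-sec 5   give up and delete the controller after ~5s of failed reconnects
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1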
00:29:24.339 13:39:41 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:24.599 [2024-04-26 13:39:41.957672] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1345570 is same with the state(5) to be set
[The same tcp.c:1587 recv-state error repeats for tqpair=0x1345570 through 13:39:41.957874. From 13:39:41.958482 onward, nvme_qpair.c again logs one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per request queued on qid:1 (WRITEs for lba 78728-79008 and READs for lba 78304-78512, len:8 each); every completion reads ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0. Entries span elapsed 00:29:24.600-00:29:24.602.]
sqid:1 cid:27 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.602 [2024-04-26 13:39:41.959935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.959946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.602 [2024-04-26 13:39:41.959956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.959967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.959976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.959987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.959997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79080 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 
13:39:41.960361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.602 [2024-04-26 13:39:41.960445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.602 [2024-04-26 13:39:41.960456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.603 [2024-04-26 13:39:41.960805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.960977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.960988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 
[2024-04-26 13:39:41.961254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.603 [2024-04-26 13:39:41.961304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.603 [2024-04-26 13:39:41.961332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:24.603 [2024-04-26 13:39:41.961342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:24.603 [2024-04-26 13:39:41.961351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78720 len:8 PRP1 0x0 PRP2 0x0 00:29:24.604 [2024-04-26 13:39:41.961360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.604 [2024-04-26 13:39:41.961417] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7e2f60 was disconnected and freed. reset controller. 00:29:24.604 [2024-04-26 13:39:41.961664] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.604 [2024-04-26 13:39:41.961753] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:24.604 [2024-04-26 13:39:41.961890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.604 [2024-04-26 13:39:41.961956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.604 [2024-04-26 13:39:41.961980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420 00:29:24.604 [2024-04-26 13:39:41.961999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:24.604 [2024-04-26 13:39:41.962020] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:24.604 [2024-04-26 13:39:41.962037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.604 [2024-04-26 13:39:41.962048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.604 [2024-04-26 13:39:41.962059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.604 [2024-04-26 13:39:41.962080] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.604 [2024-04-26 13:39:41.962090] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:24.604 13:39:41 -- host/timeout.sh@90 -- # sleep 1
00:29:25.550 [2024-04-26 13:39:42.962267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.550 [2024-04-26 13:39:42.962406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:25.550 [2024-04-26 13:39:42.962428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420
00:29:25.550 [2024-04-26 13:39:42.962461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set
00:29:25.550 [2024-04-26 13:39:42.962504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor
00:29:25.550 [2024-04-26 13:39:42.962540] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:25.550 [2024-04-26 13:39:42.962552] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:25.550 [2024-04-26 13:39:42.962563] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:25.550 [2024-04-26 13:39:42.962595] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:25.550 [2024-04-26 13:39:42.962608] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:25.550 13:39:42 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:25.807 [2024-04-26 13:39:43.253177] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:26.064 13:39:43 -- host/timeout.sh@92 -- # wait 88979
00:29:26.631 [2024-04-26 13:39:43.980857] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:34.750
00:29:34.750                                                                 Latency(us)
00:29:34.750 Device Information            : runtime(s)       IOPS      MiB/s    Fail/s     TO/s     Average        min            max
00:29:34.750 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:34.750   Verification LBA range: start 0x0 length 0x4000
00:29:34.750   NVMe0n1                     :      10.00    6006.52      23.46      0.00     0.00    21270.41    2115.03    3019898.88
00:29:34.750 ===================================================================================================================
00:29:34.750 Total                         :               6006.52      23.46      0.00     0.00    21270.41    2115.03    3019898.88
00:29:34.750 0
00:29:34.750 13:39:50 -- host/timeout.sh@97 -- # rpc_pid=89102
00:29:34.750 13:39:50 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:34.750 13:39:50 -- host/timeout.sh@98 -- # sleep 1
00:29:34.750 Running I/O for 10 seconds...
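The run above shows the path this timeout test exercises: while the TCP listener is gone, queued I/O is completed manually as ABORTED - SQ DELETION and every reconnect attempt fails with connect() errno 111 (ECONNREFUSED); once nvmf_subsystem_add_listener restores the port, the next reconnect poll succeeds and bdev_nvme reports "Resetting controller successful." A minimal sketch of that listener toggle, reusing the rpc.py invocations visible in the log (the one-second pause is an illustrative assumption, not the exact timing used by host/timeout.sh):

#!/usr/bin/env bash
# Sketch only: the listener toggle this timeout test drives through rpc.py.
# RPC path, NQN, address, and port are taken from the log above; the sleep
# length is an illustrative assumption rather than the host/timeout.sh value.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the TCP listener: in-flight I/O is aborted (SQ DELETION) and the
# initiator's reconnect attempts fail with connect() errno 111.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1

# Restore the listener: the next reconnect poll succeeds and bdev_nvme
# logs "Resetting controller successful."
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420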
00:29:34.750 13:39:51 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.750 [2024-04-26 13:39:52.093020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093085] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093098] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093107] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093116] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093126] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093135] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093146] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093165] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093175] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093192] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093224] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093232] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093248] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093311] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093320] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093337] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093345] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093353] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093396] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093421] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093430] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093455] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093463] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.093488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119db50 is same with the state(5) to be set 00:29:34.750 [2024-04-26 13:39:52.094244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.750 [2024-04-26 13:39:52.094286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.750 [2024-04-26 13:39:52.094309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.750 [2024-04-26 13:39:52.094321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.750 [2024-04-26 13:39:52.094333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.750 [2024-04-26 13:39:52.094343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.750 [2024-04-26 13:39:52.094356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.750 [2024-04-26 13:39:52.094365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.750 [2024-04-26 13:39:52.094376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79448 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:34.751 [2024-04-26 13:39:52.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 
13:39:52.094939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.094979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.094992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.751 [2024-04-26 13:39:52.095151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.751 [2024-04-26 13:39:52.095173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.751 [2024-04-26 13:39:52.095194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.751 [2024-04-26 13:39:52.095215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.751 [2024-04-26 13:39:52.095226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.751 [2024-04-26 13:39:52.095236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095810] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.095976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.095991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.096003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.096012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.752 [2024-04-26 13:39:52.096023] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.752 [2024-04-26 13:39:52.096033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80152 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.753 [2024-04-26 13:39:52.096464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80232 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80240 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80248 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80256 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80272 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80312 len:8 PRP1 0x0 PRP2 0x0 00:29:34.753 [2024-04-26 13:39:52.096910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.753 [2024-04-26 13:39:52.096919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.753 [2024-04-26 13:39:52.096926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.753 [2024-04-26 13:39:52.096934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.096942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.096951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.096962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.096979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80328 len:8 PRP1 0x0 PRP2 0x0 
00:29:34.754 [2024-04-26 13:39:52.096990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.096999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80336 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80352 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80360 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80368 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80376 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80384 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.097261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.097271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.097278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.097292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80392 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80400 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79696 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79704 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79712 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79720 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79728 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.754 [2024-04-26 13:39:52.105574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:34.754 [2024-04-26 13:39:52.105584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:34.754 [2024-04-26 13:39:52.105595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:29:34.754 [2024-04-26 13:39:52.105607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.755 [2024-04-26 13:39:52.105683] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7c1660 was disconnected and freed. reset controller. 
00:29:34.755 [2024-04-26 13:39:52.105832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.755 [2024-04-26 13:39:52.105856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.755 [2024-04-26 13:39:52.105872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.755 [2024-04-26 13:39:52.105885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.755 [2024-04-26 13:39:52.105899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.755 [2024-04-26 13:39:52.105911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.755 [2024-04-26 13:39:52.105925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:34.755 [2024-04-26 13:39:52.105937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.755 [2024-04-26 13:39:52.105950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:34.755 [2024-04-26 13:39:52.106278] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.755 [2024-04-26 13:39:52.106318] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:34.755 [2024-04-26 13:39:52.106478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.755 [2024-04-26 13:39:52.106547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.755 [2024-04-26 13:39:52.106570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420 00:29:34.755 [2024-04-26 13:39:52.106585] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:34.755 [2024-04-26 13:39:52.106610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:34.755 [2024-04-26 13:39:52.106631] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.755 [2024-04-26 13:39:52.106643] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.755 [2024-04-26 13:39:52.106657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.755 [2024-04-26 13:39:52.106683] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.755 [2024-04-26 13:39:52.106706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.755 13:39:52 -- host/timeout.sh@101 -- # sleep 3 00:29:35.689 [2024-04-26 13:39:53.106876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.689 [2024-04-26 13:39:53.107202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.689 [2024-04-26 13:39:53.107231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420 00:29:35.689 [2024-04-26 13:39:53.107248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:35.689 [2024-04-26 13:39:53.107288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:35.689 [2024-04-26 13:39:53.107310] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.689 [2024-04-26 13:39:53.107321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.689 [2024-04-26 13:39:53.107333] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.689 [2024-04-26 13:39:53.107365] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:35.689 [2024-04-26 13:39:53.107377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.083 [2024-04-26 13:39:54.107551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.083 [2024-04-26 13:39:54.107672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.083 [2024-04-26 13:39:54.107694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420 00:29:37.083 [2024-04-26 13:39:54.107711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:37.083 [2024-04-26 13:39:54.107744] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:37.083 [2024-04-26 13:39:54.107765] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.083 [2024-04-26 13:39:54.107776] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.083 [2024-04-26 13:39:54.107807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.083 [2024-04-26 13:39:54.107841] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.083 [2024-04-26 13:39:54.107854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.017 [2024-04-26 13:39:55.110923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.017 [2024-04-26 13:39:55.111039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.017 [2024-04-26 13:39:55.111060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7799f0 with addr=10.0.0.2, port=4420 00:29:38.017 [2024-04-26 13:39:55.111075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7799f0 is same with the state(5) to be set 00:29:38.017 [2024-04-26 13:39:55.111344] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7799f0 (9): Bad file descriptor 00:29:38.017 [2024-04-26 13:39:55.111609] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:38.017 [2024-04-26 13:39:55.111624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:38.017 [2024-04-26 13:39:55.111637] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:38.017 [2024-04-26 13:39:55.115437] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:38.017 [2024-04-26 13:39:55.115466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:38.017 13:39:55 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.017 [2024-04-26 13:39:55.390753] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.017 13:39:55 -- host/timeout.sh@103 -- # wait 89102 00:29:38.952 [2024-04-26 13:39:56.147886] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:44.218 00:29:44.218 Latency(us) 00:29:44.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.218 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:44.218 Verification LBA range: start 0x0 length 0x4000 00:29:44.218 NVMe0n1 : 10.01 5127.43 20.03 3643.91 0.00 14556.34 666.53 3035150.89 00:29:44.218 =================================================================================================================== 00:29:44.218 Total : 5127.43 20.03 3643.91 0.00 14556.34 0.00 3035150.89 00:29:44.218 0 00:29:44.218 13:40:00 -- host/timeout.sh@105 -- # killprocess 88945 00:29:44.218 13:40:00 -- common/autotest_common.sh@936 -- # '[' -z 88945 ']' 00:29:44.218 13:40:00 -- common/autotest_common.sh@940 -- # kill -0 88945 00:29:44.218 13:40:00 -- common/autotest_common.sh@941 -- # uname 00:29:44.218 13:40:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:44.218 13:40:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88945 00:29:44.218 killing process with pid 88945 00:29:44.218 Received shutdown signal, test time was about 10.000000 seconds 00:29:44.218 00:29:44.218 Latency(us) 00:29:44.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.218 =================================================================================================================== 00:29:44.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.218 13:40:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:44.218 13:40:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:44.218 13:40:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88945' 00:29:44.218 13:40:00 -- common/autotest_common.sh@955 -- # kill 88945 00:29:44.218 13:40:00 -- common/autotest_common.sh@960 -- # wait 88945 00:29:44.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:44.218 13:40:01 -- host/timeout.sh@110 -- # bdevperf_pid=89228 00:29:44.218 13:40:01 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:29:44.218 13:40:01 -- host/timeout.sh@112 -- # waitforlisten 89228 /var/tmp/bdevperf.sock 00:29:44.219 13:40:01 -- common/autotest_common.sh@817 -- # '[' -z 89228 ']' 00:29:44.219 13:40:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:44.219 13:40:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:44.219 13:40:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:44.219 13:40:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:44.219 13:40:01 -- common/autotest_common.sh@10 -- # set +x 00:29:44.219 [2024-04-26 13:40:01.301126] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:29:44.219 [2024-04-26 13:40:01.301246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89228 ] 00:29:44.219 [2024-04-26 13:40:01.443634] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.219 [2024-04-26 13:40:01.574438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.166 13:40:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:45.166 13:40:02 -- common/autotest_common.sh@850 -- # return 0 00:29:45.166 13:40:02 -- host/timeout.sh@116 -- # dtrace_pid=89256 00:29:45.166 13:40:02 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89228 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:29:45.166 13:40:02 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:29:45.423 13:40:02 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:29:45.681 NVMe0n1 00:29:45.681 13:40:02 -- host/timeout.sh@124 -- # rpc_pid=89310 00:29:45.681 13:40:02 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:45.681 13:40:02 -- host/timeout.sh@125 -- # sleep 1 00:29:45.681 Running I/O for 10 seconds... 00:29:46.613 13:40:03 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.875 [2024-04-26 13:40:04.210109] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210173] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210204] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210213] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210229] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210238] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210255] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210289] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210305] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210346] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210362] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210370] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210386] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210411] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.875 [2024-04-26 13:40:04.210434] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a1260 is same with the state(5) to be set 00:29:46.876 [2024-04-26 13:40:04.211618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211816] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.211990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.211999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:46.876 [2024-04-26 13:40:04.212247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 
13:40:04.212470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212670] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212887] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.212983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.212993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 
lba:119568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:119632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.876 [2024-04-26 13:40:04.213186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.876 [2024-04-26 13:40:04.213197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:125176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 
13:40:04.213565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:46.877 [2024-04-26 13:40:04.213664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213701] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91368 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107744 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 
[2024-04-26 13:40:04.213843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90496 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213877] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63664 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102992 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125896 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.213976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.213983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.213991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28432 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.213999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119192 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214049] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48416 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53360 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7168 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112664 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59824 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25104 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:29:46.877 [2024-04-26 13:40:04.214259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57184 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84536 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124368 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130904 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106976 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52096 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214482] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96504 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.214523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:29:46.877 [2024-04-26 13:40:04.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.877 [2024-04-26 13:40:04.214541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.877 [2024-04-26 13:40:04.214548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.877 [2024-04-26 13:40:04.225799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77224 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.225838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.225857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.225865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.225875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21752 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.225885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.225895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.225903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.225911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119816 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.225920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.225933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.225941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.225949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27544 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.225959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.225968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.225975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:29:46.878 [2024-04-26 13:40:04.225984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32336 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.225993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25864 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24928 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121744 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12560 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110552 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 
13:40:04.226193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62688 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:46.878 [2024-04-26 13:40:04.226218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:46.878 [2024-04-26 13:40:04.226227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31056 len:8 PRP1 0x0 PRP2 0x0 00:29:46.878 [2024-04-26 13:40:04.226236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226308] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23be080 was disconnected and freed. reset controller. 00:29:46.878 [2024-04-26 13:40:04.226436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.878 [2024-04-26 13:40:04.226474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.878 [2024-04-26 13:40:04.226509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.878 [2024-04-26 13:40:04.226538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.878 [2024-04-26 13:40:04.226567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.878 [2024-04-26 13:40:04.226582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23549f0 is same with the state(5) to be set 00:29:46.878 [2024-04-26 13:40:04.226895] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.878 [2024-04-26 13:40:04.226935] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23549f0 (9): Bad file descriptor 00:29:46.878 [2024-04-26 13:40:04.227058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.878 [2024-04-26 13:40:04.227112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.878 [2024-04-26 13:40:04.227130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23549f0 with addr=10.0.0.2, port=4420 00:29:46.878 [2024-04-26 13:40:04.227141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23549f0 is same with the state(5) to be set 00:29:46.878 [2024-04-26 13:40:04.227159] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23549f0 (9): Bad file descriptor 00:29:46.878 [2024-04-26 13:40:04.227176] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:46.878 [2024-04-26 13:40:04.227186] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:46.878 [2024-04-26 13:40:04.227196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:46.878 [2024-04-26 13:40:04.227217] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:46.878 [2024-04-26 13:40:04.227229] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:46.878 13:40:04 -- host/timeout.sh@128 -- # wait 89310 00:29:49.406 [2024-04-26 13:40:06.227527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.406 [2024-04-26 13:40:06.227668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.406 [2024-04-26 13:40:06.227702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23549f0 with addr=10.0.0.2, port=4420 00:29:49.406 [2024-04-26 13:40:06.227717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23549f0 is same with the state(5) to be set 00:29:49.406 [2024-04-26 13:40:06.227766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23549f0 (9): Bad file descriptor 00:29:49.406 [2024-04-26 13:40:06.227809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:49.406 [2024-04-26 13:40:06.227822] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:49.406 [2024-04-26 13:40:06.227834] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:49.406 [2024-04-26 13:40:06.227878] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:49.406 [2024-04-26 13:40:06.227894] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.780 [2024-04-26 13:40:08.228081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.780 [2024-04-26 13:40:08.228191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.780 [2024-04-26 13:40:08.228212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23549f0 with addr=10.0.0.2, port=4420 00:29:50.780 [2024-04-26 13:40:08.228228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23549f0 is same with the state(5) to be set 00:29:50.780 [2024-04-26 13:40:08.228259] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23549f0 (9): Bad file descriptor 00:29:50.780 [2024-04-26 13:40:08.228280] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.780 [2024-04-26 13:40:08.228291] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.780 [2024-04-26 13:40:08.228302] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
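The connect() failures above all report errno 111 (ECONNREFUSED): nothing is accepting on 10.0.0.2:4420 while the target side is down, so each reconnect attempt fails and bdev_nvme schedules another reset roughly two seconds later (13:40:04 -> :06 -> :08 -> :10). As an illustrative aside only (not part of the captured output), the same condition can be probed by hand with a plain TCP check; the address and port are taken from the log above, the use of nc is an assumption, not something the test runs:

    # errno 111 == ECONNREFUSED: no listener on the target address/port right now
    nc -z -w 1 10.0.0.2 4420 && echo "listener up" || echo "connection refused or timed out"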
00:29:50.780 [2024-04-26 13:40:08.228333] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.780 [2024-04-26 13:40:08.228346] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.308 [2024-04-26 13:40:10.228467] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:53.875 00:29:53.875 Latency(us) 00:29:53.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.875 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:53.875 NVMe0n1 : 8.20 2519.10 9.84 15.61 0.00 50570.79 2383.13 7046430.72 00:29:53.875 =================================================================================================================== 00:29:53.875 Total : 2519.10 9.84 15.61 0.00 50570.79 2383.13 7046430.72 00:29:53.875 0 00:29:53.875 13:40:11 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:53.875 Attaching 5 probes... 00:29:53.875 1400.040529: reset bdev controller NVMe0 00:29:53.875 1400.131920: reconnect bdev controller NVMe0 00:29:53.875 3400.494236: reconnect delay bdev controller NVMe0 00:29:53.875 3400.526248: reconnect bdev controller NVMe0 00:29:53.875 5401.091839: reconnect delay bdev controller NVMe0 00:29:53.875 5401.116707: reconnect bdev controller NVMe0 00:29:53.875 7401.543664: reconnect delay bdev controller NVMe0 00:29:53.875 7401.574719: reconnect bdev controller NVMe0 00:29:53.875 13:40:11 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:53.875 13:40:11 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:53.875 13:40:11 -- host/timeout.sh@136 -- # kill 89256 00:29:53.875 13:40:11 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:53.875 13:40:11 -- host/timeout.sh@139 -- # killprocess 89228 00:29:53.875 13:40:11 -- common/autotest_common.sh@936 -- # '[' -z 89228 ']' 00:29:53.875 13:40:11 -- common/autotest_common.sh@940 -- # kill -0 89228 00:29:53.875 13:40:11 -- common/autotest_common.sh@941 -- # uname 00:29:53.875 13:40:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:53.875 13:40:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89228 00:29:53.875 killing process with pid 89228 00:29:53.875 Received shutdown signal, test time was about 8.260517 seconds 00:29:53.875 00:29:53.875 Latency(us) 00:29:53.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.875 =================================================================================================================== 00:29:53.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.875 13:40:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:29:53.875 13:40:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:29:53.875 13:40:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89228' 00:29:53.875 13:40:11 -- common/autotest_common.sh@955 -- # kill 89228 00:29:53.875 13:40:11 -- common/autotest_common.sh@960 -- # wait 89228 00:29:54.133 13:40:11 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:54.391 13:40:11 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:54.391 13:40:11 -- host/timeout.sh@145 -- # nvmftestfini 00:29:54.391 13:40:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:54.391 13:40:11 -- nvmf/common.sh@117 -- # sync 
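The final check of the timeout test above counts 'reconnect delay bdev controller NVMe0' lines in trace.txt (three were recorded) and compares the count against 2 before the remaining test processes are killed. A minimal sketch of that verification, assuming the test fails when two or fewer delays were seen (the exact failure handling inside host/timeout.sh is not visible in this excerpt):

    # Pattern and trace.txt path are taken from the log; the failure branch is an assumption.
    count=$(grep -c 'reconnect delay bdev controller NVMe0' /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt)
    if (( count <= 2 )); then
        echo "expected more than 2 reconnect delays, saw $count" >&2
        exit 1
    fi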
00:29:54.650 13:40:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.650 13:40:11 -- nvmf/common.sh@120 -- # set +e 00:29:54.650 13:40:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.650 13:40:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.650 rmmod nvme_tcp 00:29:54.650 rmmod nvme_fabrics 00:29:54.650 rmmod nvme_keyring 00:29:54.650 13:40:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.650 13:40:12 -- nvmf/common.sh@124 -- # set -e 00:29:54.650 13:40:12 -- nvmf/common.sh@125 -- # return 0 00:29:54.650 13:40:12 -- nvmf/common.sh@478 -- # '[' -n 88646 ']' 00:29:54.650 13:40:12 -- nvmf/common.sh@479 -- # killprocess 88646 00:29:54.650 13:40:12 -- common/autotest_common.sh@936 -- # '[' -z 88646 ']' 00:29:54.650 13:40:12 -- common/autotest_common.sh@940 -- # kill -0 88646 00:29:54.650 13:40:12 -- common/autotest_common.sh@941 -- # uname 00:29:54.650 13:40:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:54.650 13:40:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88646 00:29:54.650 killing process with pid 88646 00:29:54.650 13:40:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:54.650 13:40:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:54.650 13:40:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88646' 00:29:54.650 13:40:12 -- common/autotest_common.sh@955 -- # kill 88646 00:29:54.650 13:40:12 -- common/autotest_common.sh@960 -- # wait 88646 00:29:54.909 13:40:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:54.909 13:40:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:54.909 13:40:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:54.909 13:40:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.909 13:40:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.909 13:40:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.909 13:40:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:54.909 13:40:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.168 13:40:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:55.168 00:29:55.168 real 0m47.701s 00:29:55.168 user 2m20.261s 00:29:55.168 sys 0m5.231s 00:29:55.168 13:40:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:55.168 ************************************ 00:29:55.168 END TEST nvmf_timeout 00:29:55.168 ************************************ 00:29:55.168 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.168 13:40:12 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:29:55.168 13:40:12 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:29:55.168 13:40:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:55.168 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.168 13:40:12 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:29:55.168 00:29:55.168 real 12m20.056s 00:29:55.168 user 32m36.097s 00:29:55.168 sys 2m50.473s 00:29:55.168 13:40:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:55.168 ************************************ 00:29:55.168 END TEST nvmf_tcp 00:29:55.168 ************************************ 00:29:55.168 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.168 13:40:12 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:29:55.168 13:40:12 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:55.168 13:40:12 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:55.168 13:40:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:55.168 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.168 ************************************ 00:29:55.168 START TEST spdkcli_nvmf_tcp 00:29:55.168 ************************************ 00:29:55.168 13:40:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:55.426 * Looking for test storage... 00:29:55.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:29:55.426 13:40:12 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:29:55.426 13:40:12 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:29:55.426 13:40:12 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:29:55.426 13:40:12 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:55.426 13:40:12 -- nvmf/common.sh@7 -- # uname -s 00:29:55.426 13:40:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.426 13:40:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.426 13:40:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.426 13:40:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.426 13:40:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.426 13:40:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.426 13:40:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.426 13:40:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.426 13:40:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.426 13:40:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.427 13:40:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:29:55.427 13:40:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:29:55.427 13:40:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.427 13:40:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.427 13:40:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:55.427 13:40:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.427 13:40:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:55.427 13:40:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.427 13:40:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.427 13:40:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.427 13:40:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.427 13:40:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.427 13:40:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.427 13:40:12 -- paths/export.sh@5 -- # export PATH 00:29:55.427 13:40:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.427 13:40:12 -- nvmf/common.sh@47 -- # : 0 00:29:55.427 13:40:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.427 13:40:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.427 13:40:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.427 13:40:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.427 13:40:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.427 13:40:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.427 13:40:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.427 13:40:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.427 13:40:12 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:55.427 13:40:12 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:55.427 13:40:12 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:55.427 13:40:12 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:55.427 13:40:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:55.427 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.427 13:40:12 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:55.427 13:40:12 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=89543 00:29:55.427 13:40:12 -- spdkcli/common.sh@34 -- # waitforlisten 89543 00:29:55.427 13:40:12 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:55.427 13:40:12 -- common/autotest_common.sh@817 -- # '[' -z 89543 ']' 00:29:55.427 13:40:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.427 13:40:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:55.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.427 13:40:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.427 13:40:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:55.427 13:40:12 -- common/autotest_common.sh@10 -- # set +x 00:29:55.427 [2024-04-26 13:40:12.746655] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
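The spdkcli commands issued next drive this freshly started nvmf_tgt over its JSON-RPC socket, creating malloc bdevs, the TCP transport, subsystems, namespaces, listeners and host entries, and later tearing them all down again. As a hedged illustration only (the test runs spdkcli_job.py, not rpc.py), an equivalent subset expressed as direct rpc.py calls could look like the sketch below; names, sizes, ports and the nvmf_tgt invocation mirror the log, everything else is an assumption:

    # Illustrative rpc.py equivalent of a few of the spdkcli 'create' commands that follow.
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    ./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc3
    ./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -a -m 4
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260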
00:29:55.427 [2024-04-26 13:40:12.746766] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89543 ] 00:29:55.685 [2024-04-26 13:40:12.886665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:55.685 [2024-04-26 13:40:13.021357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.685 [2024-04-26 13:40:13.021370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.262 13:40:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:56.262 13:40:13 -- common/autotest_common.sh@850 -- # return 0 00:29:56.262 13:40:13 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:56.262 13:40:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:56.262 13:40:13 -- common/autotest_common.sh@10 -- # set +x 00:29:56.520 13:40:13 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:56.520 13:40:13 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:56.520 13:40:13 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:56.520 13:40:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:56.520 13:40:13 -- common/autotest_common.sh@10 -- # set +x 00:29:56.520 13:40:13 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:56.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:56.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:56.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:56.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:56.520 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:56.520 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:56.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:56.520 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:56.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:56.520 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:56.520 ' 00:29:56.779 [2024-04-26 13:40:14.117773] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:59.312 [2024-04-26 13:40:16.360647] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.274 [2024-04-26 13:40:17.629712] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:02.800 [2024-04-26 13:40:20.019272] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:04.703 [2024-04-26 13:40:22.056697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:06.600 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:06.600 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:06.600 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:06.600 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:06.600 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:06.600 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:06.600 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:06.600 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:06.600 13:40:23 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:06.600 13:40:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:06.600 13:40:23 -- common/autotest_common.sh@10 -- # set +x 00:30:06.600 13:40:23 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:06.600 13:40:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:06.600 13:40:23 -- common/autotest_common.sh@10 -- # set +x 00:30:06.600 13:40:23 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:06.600 13:40:23 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:30:06.859 13:40:24 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:06.859 13:40:24 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:06.859 13:40:24 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:06.859 13:40:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:06.859 13:40:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.859 13:40:24 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:06.859 13:40:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:06.859 13:40:24 -- common/autotest_common.sh@10 -- # set +x 00:30:06.859 13:40:24 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:06.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:06.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:06.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:06.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:06.859 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:06.859 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:06.859 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:06.859 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:06.859 ' 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:13.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:13.440 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:13.440 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:13.440 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:13.440 13:40:29 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:13.440 13:40:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:13.440 13:40:29 -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 13:40:29 -- spdkcli/nvmf.sh@90 -- # killprocess 89543 00:30:13.440 13:40:29 -- common/autotest_common.sh@936 -- # '[' -z 89543 ']' 00:30:13.440 13:40:29 -- common/autotest_common.sh@940 -- # kill -0 89543 00:30:13.440 13:40:29 -- common/autotest_common.sh@941 -- # uname 00:30:13.440 13:40:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:13.440 13:40:29 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 89543 00:30:13.440 13:40:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:13.440 killing process with pid 89543 00:30:13.440 13:40:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:13.440 13:40:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89543' 00:30:13.440 13:40:29 -- common/autotest_common.sh@955 -- # kill 89543 00:30:13.440 [2024-04-26 13:40:29.827464] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:13.440 13:40:29 -- common/autotest_common.sh@960 -- # wait 89543 00:30:13.440 13:40:30 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:13.440 13:40:30 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:13.440 13:40:30 -- spdkcli/common.sh@13 -- # '[' -n 89543 ']' 00:30:13.440 13:40:30 -- spdkcli/common.sh@14 -- # killprocess 89543 00:30:13.440 13:40:30 -- common/autotest_common.sh@936 -- # '[' -z 89543 ']' 00:30:13.440 13:40:30 -- common/autotest_common.sh@940 -- # kill -0 89543 00:30:13.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89543) - No such process 00:30:13.440 Process with pid 89543 is not found 00:30:13.440 13:40:30 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89543 is not found' 00:30:13.440 13:40:30 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:13.440 13:40:30 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:13.440 13:40:30 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:13.440 00:30:13.440 real 0m17.515s 00:30:13.440 user 0m37.672s 00:30:13.440 sys 0m0.994s 00:30:13.440 13:40:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:13.440 ************************************ 00:30:13.440 END TEST spdkcli_nvmf_tcp 00:30:13.440 ************************************ 00:30:13.440 13:40:30 -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 13:40:30 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:13.440 13:40:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:13.440 13:40:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:13.440 13:40:30 -- common/autotest_common.sh@10 -- # set +x 00:30:13.440 ************************************ 00:30:13.440 START TEST nvmf_identify_passthru 00:30:13.440 ************************************ 00:30:13.440 13:40:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:13.440 * Looking for test storage... 
00:30:13.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:13.440 13:40:30 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:13.440 13:40:30 -- nvmf/common.sh@7 -- # uname -s 00:30:13.440 13:40:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.440 13:40:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.440 13:40:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.440 13:40:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.440 13:40:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.440 13:40:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.440 13:40:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.440 13:40:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.440 13:40:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.440 13:40:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.440 13:40:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:30:13.440 13:40:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:30:13.440 13:40:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.440 13:40:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.440 13:40:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:13.440 13:40:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.440 13:40:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:13.440 13:40:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.440 13:40:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.440 13:40:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.440 13:40:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.440 13:40:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.440 13:40:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.440 13:40:30 -- paths/export.sh@5 -- # export PATH 00:30:13.441 13:40:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.441 13:40:30 -- nvmf/common.sh@47 -- # : 0 00:30:13.441 13:40:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.441 13:40:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.441 13:40:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.441 13:40:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.441 13:40:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.441 13:40:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.441 13:40:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.441 13:40:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.441 13:40:30 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:13.441 13:40:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.441 13:40:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.441 13:40:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.441 13:40:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.441 13:40:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.441 13:40:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.441 13:40:30 -- paths/export.sh@5 -- # export PATH 00:30:13.441 13:40:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.441 13:40:30 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:30:13.441 13:40:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:13.441 13:40:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.441 13:40:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:13.441 13:40:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:13.441 13:40:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:13.441 13:40:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.441 13:40:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:13.441 13:40:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.441 13:40:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:13.441 13:40:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:13.441 13:40:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:13.441 13:40:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:13.441 13:40:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:13.441 13:40:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:13.441 13:40:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.441 13:40:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.441 13:40:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:13.441 13:40:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:13.441 13:40:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:13.441 13:40:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:13.441 13:40:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:13.441 13:40:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.441 13:40:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:13.441 13:40:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:13.441 13:40:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:13.441 13:40:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:13.441 13:40:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:13.441 13:40:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:13.441 Cannot find device "nvmf_tgt_br" 00:30:13.441 13:40:30 -- nvmf/common.sh@155 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:13.441 Cannot find device "nvmf_tgt_br2" 00:30:13.441 13:40:30 -- nvmf/common.sh@156 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:13.441 13:40:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:13.441 Cannot find device "nvmf_tgt_br" 00:30:13.441 13:40:30 -- nvmf/common.sh@158 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:13.441 Cannot find device "nvmf_tgt_br2" 00:30:13.441 13:40:30 -- nvmf/common.sh@159 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:13.441 13:40:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:13.441 13:40:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:13.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:13.441 13:40:30 -- nvmf/common.sh@162 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:13.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:30:13.441 13:40:30 -- nvmf/common.sh@163 -- # true 00:30:13.441 13:40:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:13.441 13:40:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:13.441 13:40:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:13.441 13:40:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:13.441 13:40:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:13.441 13:40:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:13.441 13:40:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:13.441 13:40:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:13.441 13:40:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:13.441 13:40:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:13.441 13:40:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:13.441 13:40:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:13.441 13:40:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:13.441 13:40:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:13.441 13:40:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:13.441 13:40:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:13.441 13:40:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:13.441 13:40:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:13.441 13:40:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:13.441 13:40:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:13.441 13:40:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:13.441 13:40:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:13.441 13:40:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:13.441 13:40:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:13.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:30:13.442 00:30:13.442 --- 10.0.0.2 ping statistics --- 00:30:13.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.442 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:13.442 13:40:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:13.442 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:13.442 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:30:13.442 00:30:13.442 --- 10.0.0.3 ping statistics --- 00:30:13.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.442 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:13.442 13:40:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:13.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:30:13.442 00:30:13.442 --- 10.0.0.1 ping statistics --- 00:30:13.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.442 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:30:13.442 13:40:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.442 13:40:30 -- nvmf/common.sh@422 -- # return 0 00:30:13.442 13:40:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:13.442 13:40:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.442 13:40:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:13.442 13:40:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:13.442 13:40:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.442 13:40:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:13.442 13:40:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:13.442 13:40:30 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:13.442 13:40:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:13.442 13:40:30 -- common/autotest_common.sh@10 -- # set +x 00:30:13.442 13:40:30 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:13.442 13:40:30 -- common/autotest_common.sh@1510 -- # bdfs=() 00:30:13.442 13:40:30 -- common/autotest_common.sh@1510 -- # local bdfs 00:30:13.442 13:40:30 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:30:13.442 13:40:30 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:30:13.442 13:40:30 -- common/autotest_common.sh@1499 -- # bdfs=() 00:30:13.442 13:40:30 -- common/autotest_common.sh@1499 -- # local bdfs 00:30:13.442 13:40:30 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:13.442 13:40:30 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:13.442 13:40:30 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:30:13.442 13:40:30 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:30:13.442 13:40:30 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:13.442 13:40:30 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:30:13.442 13:40:30 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:30:13.442 13:40:30 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:30:13.442 13:40:30 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:13.442 13:40:30 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:13.442 13:40:30 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:13.701 13:40:30 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:30:13.701 13:40:30 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:30:13.701 13:40:30 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:13.701 13:40:30 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:13.701 13:40:31 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:30:13.701 13:40:31 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:13.701 13:40:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:13.701 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.960 13:40:31 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:30:13.960 13:40:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:13.960 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.960 13:40:31 -- target/identify_passthru.sh@31 -- # nvmfpid=90051 00:30:13.960 13:40:31 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:13.960 13:40:31 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:13.960 13:40:31 -- target/identify_passthru.sh@35 -- # waitforlisten 90051 00:30:13.960 13:40:31 -- common/autotest_common.sh@817 -- # '[' -z 90051 ']' 00:30:13.960 13:40:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.960 13:40:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:13.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.960 13:40:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.960 13:40:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:13.960 13:40:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.960 [2024-04-26 13:40:31.237099] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:30:13.960 [2024-04-26 13:40:31.237201] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.960 [2024-04-26 13:40:31.378402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.218 [2024-04-26 13:40:31.509763] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.218 [2024-04-26 13:40:31.509840] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.218 [2024-04-26 13:40:31.509855] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.218 [2024-04-26 13:40:31.509867] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.218 [2024-04-26 13:40:31.509876] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.218 [2024-04-26 13:40:31.510064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.218 [2024-04-26 13:40:31.511203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.218 [2024-04-26 13:40:31.511326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.218 [2024-04-26 13:40:31.511336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.154 13:40:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:15.154 13:40:32 -- common/autotest_common.sh@850 -- # return 0 00:30:15.154 13:40:32 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 [2024-04-26 13:40:32.367193] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 [2024-04-26 13:40:32.381119] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:15.154 13:40:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 13:40:32 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 Nvme0n1 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 [2024-04-26 13:40:32.525066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:15.154 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.154 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.154 [2024-04-26 13:40:32.532790] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:15.154 [ 00:30:15.154 { 00:30:15.154 "allow_any_host": true, 00:30:15.154 "hosts": [], 00:30:15.154 "listen_addresses": [], 00:30:15.154 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:15.154 "subtype": "Discovery" 00:30:15.154 }, 00:30:15.154 { 00:30:15.154 "allow_any_host": true, 00:30:15.154 "hosts": [], 00:30:15.154 "listen_addresses": [ 00:30:15.154 { 00:30:15.154 "adrfam": "IPv4", 00:30:15.154 "traddr": "10.0.0.2", 00:30:15.154 "transport": "TCP", 00:30:15.154 "trsvcid": "4420", 00:30:15.154 "trtype": "TCP" 00:30:15.154 } 00:30:15.154 ], 00:30:15.154 "max_cntlid": 65519, 00:30:15.154 "max_namespaces": 1, 00:30:15.154 "min_cntlid": 1, 00:30:15.154 "model_number": "SPDK bdev Controller", 00:30:15.154 "namespaces": [ 00:30:15.154 { 00:30:15.154 "bdev_name": "Nvme0n1", 00:30:15.154 "name": "Nvme0n1", 00:30:15.154 "nguid": "56EC2465B2A743298EB8C97F6E0CA189", 00:30:15.154 "nsid": 1, 00:30:15.154 "uuid": "56ec2465-b2a7-4329-8eb8-c97f6e0ca189" 00:30:15.154 } 00:30:15.154 ], 00:30:15.154 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.154 "serial_number": "SPDK00000000000001", 00:30:15.154 "subtype": "NVMe" 00:30:15.154 } 00:30:15.154 ] 00:30:15.154 13:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.154 13:40:32 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:15.154 13:40:32 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:15.154 13:40:32 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:15.413 13:40:32 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:30:15.413 13:40:32 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:15.413 13:40:32 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:15.413 13:40:32 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:15.673 13:40:32 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:30:15.673 13:40:32 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:30:15.673 13:40:32 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:30:15.673 13:40:32 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.673 13:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:15.673 13:40:32 -- common/autotest_common.sh@10 -- # set +x 00:30:15.673 13:40:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:15.673 13:40:33 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:15.673 13:40:33 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:15.673 13:40:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:15.673 13:40:33 -- nvmf/common.sh@117 -- # sync 00:30:15.944 13:40:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:15.944 13:40:33 -- nvmf/common.sh@120 -- # set +e 00:30:15.944 13:40:33 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:30:15.944 13:40:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:15.944 rmmod nvme_tcp 00:30:15.944 rmmod nvme_fabrics 00:30:15.944 rmmod nvme_keyring 00:30:15.944 13:40:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:15.944 13:40:33 -- nvmf/common.sh@124 -- # set -e 00:30:15.944 13:40:33 -- nvmf/common.sh@125 -- # return 0 00:30:15.944 13:40:33 -- nvmf/common.sh@478 -- # '[' -n 90051 ']' 00:30:15.944 13:40:33 -- nvmf/common.sh@479 -- # killprocess 90051 00:30:15.944 13:40:33 -- common/autotest_common.sh@936 -- # '[' -z 90051 ']' 00:30:15.944 13:40:33 -- common/autotest_common.sh@940 -- # kill -0 90051 00:30:15.944 13:40:33 -- common/autotest_common.sh@941 -- # uname 00:30:15.944 13:40:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:15.944 13:40:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90051 00:30:15.944 killing process with pid 90051 00:30:15.944 13:40:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:15.944 13:40:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:15.944 13:40:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90051' 00:30:15.944 13:40:33 -- common/autotest_common.sh@955 -- # kill 90051 00:30:15.944 [2024-04-26 13:40:33.243404] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:15.944 13:40:33 -- common/autotest_common.sh@960 -- # wait 90051 00:30:16.203 13:40:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:16.203 13:40:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:16.203 13:40:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:16.203 13:40:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.203 13:40:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:16.203 13:40:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.203 13:40:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:16.203 13:40:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.203 13:40:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:16.203 00:30:16.203 real 0m3.339s 00:30:16.203 user 0m7.890s 00:30:16.203 sys 0m0.956s 00:30:16.203 13:40:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:16.203 13:40:33 -- common/autotest_common.sh@10 -- # set +x 00:30:16.203 ************************************ 00:30:16.203 END TEST nvmf_identify_passthru 00:30:16.203 ************************************ 00:30:16.203 13:40:33 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:16.203 13:40:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:16.203 13:40:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:16.203 13:40:33 -- common/autotest_common.sh@10 -- # set +x 00:30:16.462 ************************************ 00:30:16.462 START TEST nvmf_dif 00:30:16.462 ************************************ 00:30:16.462 13:40:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:30:16.462 * Looking for test storage... 
00:30:16.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:16.462 13:40:33 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:16.462 13:40:33 -- nvmf/common.sh@7 -- # uname -s 00:30:16.462 13:40:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.462 13:40:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.462 13:40:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.462 13:40:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.462 13:40:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.462 13:40:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.462 13:40:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.462 13:40:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.462 13:40:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.462 13:40:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:30:16.462 13:40:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:30:16.462 13:40:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.462 13:40:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.462 13:40:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:16.462 13:40:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.462 13:40:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:16.462 13:40:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.462 13:40:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.462 13:40:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.462 13:40:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.462 13:40:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.462 13:40:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.462 13:40:33 -- paths/export.sh@5 -- # export PATH 00:30:16.462 13:40:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.462 13:40:33 -- nvmf/common.sh@47 -- # : 0 00:30:16.462 13:40:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:16.462 13:40:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:16.462 13:40:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.462 13:40:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.462 13:40:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.462 13:40:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:16.462 13:40:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:16.462 13:40:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:16.462 13:40:33 -- target/dif.sh@15 -- # NULL_META=16 00:30:16.462 13:40:33 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:16.462 13:40:33 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:16.462 13:40:33 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:16.462 13:40:33 -- target/dif.sh@135 -- # nvmftestinit 00:30:16.462 13:40:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:16.462 13:40:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.462 13:40:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:16.462 13:40:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:16.462 13:40:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:16.462 13:40:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.462 13:40:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:16.462 13:40:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.462 13:40:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:16.462 13:40:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:16.462 13:40:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.462 13:40:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.462 13:40:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:16.462 13:40:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:16.462 13:40:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:16.462 13:40:33 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:16.463 13:40:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:16.463 13:40:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.463 13:40:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:16.463 13:40:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:16.463 13:40:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:16.463 13:40:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:16.463 13:40:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:16.463 13:40:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:16.463 Cannot find device "nvmf_tgt_br" 
00:30:16.463 13:40:33 -- nvmf/common.sh@155 -- # true 00:30:16.463 13:40:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:16.463 Cannot find device "nvmf_tgt_br2" 00:30:16.463 13:40:33 -- nvmf/common.sh@156 -- # true 00:30:16.463 13:40:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:16.463 13:40:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:16.463 Cannot find device "nvmf_tgt_br" 00:30:16.463 13:40:33 -- nvmf/common.sh@158 -- # true 00:30:16.463 13:40:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:16.463 Cannot find device "nvmf_tgt_br2" 00:30:16.463 13:40:33 -- nvmf/common.sh@159 -- # true 00:30:16.463 13:40:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:16.725 13:40:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:16.725 13:40:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:16.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:16.725 13:40:33 -- nvmf/common.sh@162 -- # true 00:30:16.725 13:40:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:16.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:16.725 13:40:33 -- nvmf/common.sh@163 -- # true 00:30:16.725 13:40:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:16.725 13:40:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:16.725 13:40:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:16.725 13:40:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:16.725 13:40:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:16.725 13:40:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:16.725 13:40:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:16.725 13:40:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:16.725 13:40:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:16.725 13:40:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:16.725 13:40:34 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:16.725 13:40:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:16.725 13:40:34 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:16.725 13:40:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:16.725 13:40:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:16.725 13:40:34 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:16.725 13:40:34 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:16.725 13:40:34 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:16.725 13:40:34 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:16.725 13:40:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:16.725 13:40:34 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:16.725 13:40:34 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:16.725 13:40:34 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:16.725 13:40:34 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:16.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:30:16.725 00:30:16.725 --- 10.0.0.2 ping statistics --- 00:30:16.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.725 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:30:16.725 13:40:34 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:16.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:16.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:30:16.725 00:30:16.725 --- 10.0.0.3 ping statistics --- 00:30:16.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.725 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:30:16.725 13:40:34 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:16.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:30:16.984 00:30:16.984 --- 10.0.0.1 ping statistics --- 00:30:16.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.984 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:30:16.984 13:40:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.984 13:40:34 -- nvmf/common.sh@422 -- # return 0 00:30:16.984 13:40:34 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:16.984 13:40:34 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:17.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:17.242 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:17.242 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:30:17.242 13:40:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.242 13:40:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:17.242 13:40:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:17.242 13:40:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.242 13:40:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:17.242 13:40:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:17.242 13:40:34 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:17.242 13:40:34 -- target/dif.sh@137 -- # nvmfappstart 00:30:17.242 13:40:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:17.242 13:40:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:17.242 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:30:17.242 13:40:34 -- nvmf/common.sh@470 -- # nvmfpid=90407 00:30:17.242 13:40:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:17.242 13:40:34 -- nvmf/common.sh@471 -- # waitforlisten 90407 00:30:17.242 13:40:34 -- common/autotest_common.sh@817 -- # '[' -z 90407 ']' 00:30:17.242 13:40:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.242 13:40:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.242 13:40:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
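The ping checks above complete nvmf_veth_init: the test builds its NVMe/TCP fabric on veth pairs, keeping the initiator end (nvmf_init_if, 10.0.0.1/24) in the host namespace, moving the target ends (nvmf_tgt_if 10.0.0.2/24 and nvmf_tgt_if2 10.0.0.3/24) into the nvmf_tgt_ns_spdk namespace, enslaving all peer ends to the nvmf_br bridge, and opening TCP port 4420 in iptables. Condensed from the trace above into a standalone sketch (run as root, same interface and namespace names as the test):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator to target, matching the check above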
00:30:17.242 13:40:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:17.242 13:40:34 -- common/autotest_common.sh@10 -- # set +x 00:30:17.242 [2024-04-26 13:40:34.657967] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:30:17.242 [2024-04-26 13:40:34.658074] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.500 [2024-04-26 13:40:34.800826] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.500 [2024-04-26 13:40:34.926732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.500 [2024-04-26 13:40:34.926823] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.500 [2024-04-26 13:40:34.926840] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.500 [2024-04-26 13:40:34.926851] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.500 [2024-04-26 13:40:34.926861] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.500 [2024-04-26 13:40:34.926900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:18.436 13:40:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:18.436 13:40:35 -- common/autotest_common.sh@850 -- # return 0 00:30:18.436 13:40:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:18.436 13:40:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 13:40:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.436 13:40:35 -- target/dif.sh@139 -- # create_transport 00:30:18.436 13:40:35 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:18.436 13:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 [2024-04-26 13:40:35.762194] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.436 13:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.436 13:40:35 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:18.436 13:40:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:18.436 13:40:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 ************************************ 00:30:18.436 START TEST fio_dif_1_default 00:30:18.436 ************************************ 00:30:18.436 13:40:35 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:30:18.436 13:40:35 -- target/dif.sh@86 -- # create_subsystems 0 00:30:18.436 13:40:35 -- target/dif.sh@28 -- # local sub 00:30:18.436 13:40:35 -- target/dif.sh@30 -- # for sub in "$@" 00:30:18.436 13:40:35 -- target/dif.sh@31 -- # create_subsystem 0 00:30:18.436 13:40:35 -- target/dif.sh@18 -- # local sub_id=0 00:30:18.436 13:40:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:18.436 13:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 bdev_null0 00:30:18.436 13:40:35 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.436 13:40:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:18.436 13:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 13:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.436 13:40:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:18.436 13:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 13:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.436 13:40:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:18.436 13:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:18.436 13:40:35 -- common/autotest_common.sh@10 -- # set +x 00:30:18.436 [2024-04-26 13:40:35.878323] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.695 13:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:18.695 13:40:35 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:18.695 13:40:35 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:18.695 13:40:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:18.695 13:40:35 -- nvmf/common.sh@521 -- # config=() 00:30:18.695 13:40:35 -- nvmf/common.sh@521 -- # local subsystem config 00:30:18.695 13:40:35 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:18.695 13:40:35 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:18.695 { 00:30:18.695 "params": { 00:30:18.695 "name": "Nvme$subsystem", 00:30:18.695 "trtype": "$TEST_TRANSPORT", 00:30:18.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:18.695 "adrfam": "ipv4", 00:30:18.695 "trsvcid": "$NVMF_PORT", 00:30:18.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:18.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:18.695 "hdgst": ${hdgst:-false}, 00:30:18.695 "ddgst": ${ddgst:-false} 00:30:18.695 }, 00:30:18.695 "method": "bdev_nvme_attach_controller" 00:30:18.695 } 00:30:18.695 EOF 00:30:18.695 )") 00:30:18.695 13:40:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.695 13:40:35 -- target/dif.sh@82 -- # gen_fio_conf 00:30:18.695 13:40:35 -- target/dif.sh@54 -- # local file 00:30:18.695 13:40:35 -- target/dif.sh@56 -- # cat 00:30:18.695 13:40:35 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.695 13:40:35 -- nvmf/common.sh@543 -- # cat 00:30:18.695 13:40:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:18.695 13:40:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:18.695 13:40:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:18.695 13:40:35 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.695 13:40:35 -- common/autotest_common.sh@1327 -- # shift 00:30:18.695 13:40:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:18.695 13:40:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.695 13:40:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:18.695 13:40:35 -- target/dif.sh@72 -- # (( file <= files )) 
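In the fio_dif_1_default trace above, fio is driven through SPDK's bdev fio plugin rather than a kernel block device: fio_bdev preloads build/fio/spdk_bdev and passes the generated JSON sub config as /dev/fd/62 (via --spdk_json_conf) and the generated fio job as /dev/fd/61. A rough standalone equivalent, with the hypothetical file names bdev.json and dif.job standing in for the two process-substituted descriptors:

  # bdev.json: gen_nvmf_target_json output, i.e. a bdev_nvme_attach_controller entry "Nvme0"
  #            over tcp to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode0
  # dif.job:   gen_fio_conf output, i.e. job "filename0" doing randread, 4k blocks, iodepth 4
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.job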
00:30:18.695 13:40:35 -- nvmf/common.sh@545 -- # jq . 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.695 13:40:35 -- nvmf/common.sh@546 -- # IFS=, 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:18.695 13:40:35 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:18.695 "params": { 00:30:18.695 "name": "Nvme0", 00:30:18.695 "trtype": "tcp", 00:30:18.695 "traddr": "10.0.0.2", 00:30:18.695 "adrfam": "ipv4", 00:30:18.695 "trsvcid": "4420", 00:30:18.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.695 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.695 "hdgst": false, 00:30:18.695 "ddgst": false 00:30:18.695 }, 00:30:18.695 "method": "bdev_nvme_attach_controller" 00:30:18.695 }' 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:18.695 13:40:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:18.695 13:40:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:18.695 13:40:35 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:18.695 13:40:35 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:18.695 13:40:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:18.695 13:40:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:18.695 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:18.695 fio-3.35 00:30:18.695 Starting 1 thread 00:30:30.898 00:30:30.898 filename0: (groupid=0, jobs=1): err= 0: pid=90500: Fri Apr 26 13:40:46 2024 00:30:30.898 read: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(113MiB/10001msec) 00:30:30.898 slat (nsec): min=6251, max=64563, avg=8521.63, stdev=3362.79 00:30:30.898 clat (usec): min=405, max=42546, avg=1360.03, stdev=5814.89 00:30:30.898 lat (usec): min=411, max=42556, avg=1368.55, stdev=5814.92 00:30:30.898 clat percentiles (usec): 00:30:30.898 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 474], 20.00th=[ 482], 00:30:30.898 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 510], 00:30:30.898 | 70.00th=[ 523], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 586], 00:30:30.898 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:30:30.898 | 99.99th=[42730] 00:30:30.898 bw ( KiB/s): min= 4864, max=22592, per=100.00%, avg=11604.21, stdev=5205.93, samples=19 00:30:30.899 iops : min= 1216, max= 5648, avg=2901.05, stdev=1301.48, samples=19 00:30:30.899 lat (usec) : 500=45.76%, 750=52.12%, 1000=0.01% 00:30:30.899 lat (msec) : 4=0.01%, 50=2.09% 00:30:30.899 cpu : usr=88.78%, sys=9.56%, ctx=38, majf=0, minf=0 00:30:30.899 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:30.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:30.899 issued rwts: total=28872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:30.899 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:30.899 00:30:30.899 Run status group 
0 (all jobs): 00:30:30.899 READ: bw=11.3MiB/s (11.8MB/s), 11.3MiB/s-11.3MiB/s (11.8MB/s-11.8MB/s), io=113MiB (118MB), run=10001-10001msec 00:30:30.899 13:40:46 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:30.899 13:40:46 -- target/dif.sh@43 -- # local sub 00:30:30.899 13:40:46 -- target/dif.sh@45 -- # for sub in "$@" 00:30:30.899 13:40:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:30.899 13:40:46 -- target/dif.sh@36 -- # local sub_id=0 00:30:30.899 13:40:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:30.899 13:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:46 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:30.899 13:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:46 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 ************************************ 00:30:30.899 END TEST fio_dif_1_default 00:30:30.899 ************************************ 00:30:30.899 13:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 00:30:30.899 real 0m11.148s 00:30:30.899 user 0m9.597s 00:30:30.899 sys 0m1.292s 00:30:30.899 13:40:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:30.899 13:40:46 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:30.899 13:40:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:30.899 13:40:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 ************************************ 00:30:30.899 START TEST fio_dif_1_multi_subsystems 00:30:30.899 ************************************ 00:30:30.899 13:40:47 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:30:30.899 13:40:47 -- target/dif.sh@92 -- # local files=1 00:30:30.899 13:40:47 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:30.899 13:40:47 -- target/dif.sh@28 -- # local sub 00:30:30.899 13:40:47 -- target/dif.sh@30 -- # for sub in "$@" 00:30:30.899 13:40:47 -- target/dif.sh@31 -- # create_subsystem 0 00:30:30.899 13:40:47 -- target/dif.sh@18 -- # local sub_id=0 00:30:30.899 13:40:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 bdev_null0 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp 
-a 10.0.0.2 -s 4420 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 [2024-04-26 13:40:47.144499] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@30 -- # for sub in "$@" 00:30:30.899 13:40:47 -- target/dif.sh@31 -- # create_subsystem 1 00:30:30.899 13:40:47 -- target/dif.sh@18 -- # local sub_id=1 00:30:30.899 13:40:47 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 bdev_null1 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.899 13:40:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.899 13:40:47 -- common/autotest_common.sh@10 -- # set +x 00:30:30.899 13:40:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.899 13:40:47 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:30.899 13:40:47 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:30.899 13:40:47 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:30.899 13:40:47 -- nvmf/common.sh@521 -- # config=() 00:30:30.899 13:40:47 -- nvmf/common.sh@521 -- # local subsystem config 00:30:30.899 13:40:47 -- target/dif.sh@82 -- # gen_fio_conf 00:30:30.899 13:40:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:30.899 13:40:47 -- target/dif.sh@54 -- # local file 00:30:30.899 13:40:47 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.899 13:40:47 -- target/dif.sh@56 -- # cat 00:30:30.899 13:40:47 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.899 13:40:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:30.899 13:40:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:30.899 13:40:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:30.899 { 00:30:30.899 "params": { 00:30:30.899 "name": "Nvme$subsystem", 00:30:30.899 "trtype": "$TEST_TRANSPORT", 00:30:30.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:30.899 "adrfam": "ipv4", 00:30:30.899 "trsvcid": "$NVMF_PORT", 00:30:30.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:30.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:30.899 "hdgst": ${hdgst:-false}, 00:30:30.899 "ddgst": ${ddgst:-false} 00:30:30.899 }, 
00:30:30.899 "method": "bdev_nvme_attach_controller" 00:30:30.899 } 00:30:30.899 EOF 00:30:30.899 )") 00:30:30.899 13:40:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:30.899 13:40:47 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:30.899 13:40:47 -- common/autotest_common.sh@1327 -- # shift 00:30:30.899 13:40:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:30.899 13:40:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.899 13:40:47 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:30.899 13:40:47 -- target/dif.sh@72 -- # (( file <= files )) 00:30:30.899 13:40:47 -- target/dif.sh@73 -- # cat 00:30:30.899 13:40:47 -- nvmf/common.sh@543 -- # cat 00:30:30.899 13:40:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:30.899 13:40:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:30.899 13:40:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:30.899 13:40:47 -- target/dif.sh@72 -- # (( file++ )) 00:30:30.899 13:40:47 -- target/dif.sh@72 -- # (( file <= files )) 00:30:30.899 13:40:47 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:30.899 13:40:47 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:30.899 { 00:30:30.899 "params": { 00:30:30.899 "name": "Nvme$subsystem", 00:30:30.899 "trtype": "$TEST_TRANSPORT", 00:30:30.899 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:30.899 "adrfam": "ipv4", 00:30:30.899 "trsvcid": "$NVMF_PORT", 00:30:30.899 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:30.899 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:30.899 "hdgst": ${hdgst:-false}, 00:30:30.899 "ddgst": ${ddgst:-false} 00:30:30.899 }, 00:30:30.899 "method": "bdev_nvme_attach_controller" 00:30:30.899 } 00:30:30.899 EOF 00:30:30.899 )") 00:30:30.899 13:40:47 -- nvmf/common.sh@543 -- # cat 00:30:30.899 13:40:47 -- nvmf/common.sh@545 -- # jq . 
00:30:30.899 13:40:47 -- nvmf/common.sh@546 -- # IFS=, 00:30:30.899 13:40:47 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:30.899 "params": { 00:30:30.899 "name": "Nvme0", 00:30:30.899 "trtype": "tcp", 00:30:30.899 "traddr": "10.0.0.2", 00:30:30.899 "adrfam": "ipv4", 00:30:30.899 "trsvcid": "4420", 00:30:30.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:30.899 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:30.899 "hdgst": false, 00:30:30.899 "ddgst": false 00:30:30.899 }, 00:30:30.899 "method": "bdev_nvme_attach_controller" 00:30:30.899 },{ 00:30:30.899 "params": { 00:30:30.899 "name": "Nvme1", 00:30:30.899 "trtype": "tcp", 00:30:30.899 "traddr": "10.0.0.2", 00:30:30.899 "adrfam": "ipv4", 00:30:30.899 "trsvcid": "4420", 00:30:30.899 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.899 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:30.899 "hdgst": false, 00:30:30.899 "ddgst": false 00:30:30.899 }, 00:30:30.899 "method": "bdev_nvme_attach_controller" 00:30:30.899 }' 00:30:30.899 13:40:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:30.899 13:40:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:30.899 13:40:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:30.899 13:40:47 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:30.900 13:40:47 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:30.900 13:40:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:30.900 13:40:47 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:30.900 13:40:47 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:30.900 13:40:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:30.900 13:40:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:30.900 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:30.900 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:30.900 fio-3.35 00:30:30.900 Starting 2 threads 00:30:40.871 00:30:40.871 filename0: (groupid=0, jobs=1): err= 0: pid=90664: Fri Apr 26 13:40:58 2024 00:30:40.871 read: IOPS=214, BW=859KiB/s (879kB/s)(8592KiB/10007msec) 00:30:40.871 slat (nsec): min=7397, max=59191, avg=11072.44, stdev=7160.68 00:30:40.871 clat (usec): min=458, max=43069, avg=18597.90, stdev=20177.63 00:30:40.871 lat (usec): min=465, max=43084, avg=18608.97, stdev=20177.09 00:30:40.871 clat percentiles (usec): 00:30:40.871 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 510], 00:30:40.871 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 906], 60.00th=[40633], 00:30:40.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:30:40.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:40.871 | 99.99th=[43254] 00:30:40.871 bw ( KiB/s): min= 640, max= 1248, per=50.34%, avg=850.58, stdev=141.18, samples=19 00:30:40.871 iops : min= 160, max= 312, avg=212.63, stdev=35.31, samples=19 00:30:40.871 lat (usec) : 500=15.50%, 750=28.72%, 1000=10.71% 00:30:40.871 lat (msec) : 2=0.74%, 50=44.32% 00:30:40.871 cpu : usr=95.50%, sys=4.06%, ctx=20, majf=0, minf=0 00:30:40.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 issued rwts: total=2148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:40.871 filename1: (groupid=0, jobs=1): err= 0: pid=90665: Fri Apr 26 13:40:58 2024 00:30:40.871 read: IOPS=207, BW=830KiB/s (850kB/s)(8304KiB/10007msec) 00:30:40.871 slat (nsec): min=5082, max=66085, avg=11470.07, stdev=7871.28 00:30:40.871 clat (usec): min=456, max=43204, avg=19242.68, stdev=20260.73 00:30:40.871 lat (usec): min=464, max=43218, avg=19254.15, stdev=20260.28 00:30:40.871 clat percentiles (usec): 00:30:40.871 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 510], 00:30:40.871 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 898], 60.00th=[40633], 00:30:40.871 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:30:40.871 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:30:40.871 | 99.99th=[43254] 00:30:40.871 bw ( KiB/s): min= 640, max= 1024, per=49.16%, avg=830.32, stdev=118.05, samples=19 00:30:40.871 iops : min= 160, max= 256, avg=207.58, stdev=29.51, samples=19 00:30:40.871 lat (usec) : 500=14.88%, 750=28.08%, 1000=10.50% 00:30:40.871 lat (msec) : 2=0.48%, 4=0.19%, 50=45.86% 00:30:40.871 cpu : usr=95.44%, sys=4.13%, ctx=10, majf=0, minf=0 00:30:40.871 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.871 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.871 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:40.871 00:30:40.871 Run status group 0 (all jobs): 00:30:40.871 READ: bw=1688KiB/s (1729kB/s), 830KiB/s-859KiB/s (850kB/s-879kB/s), io=16.5MiB (17.3MB), run=10007-10007msec 00:30:41.129 13:40:58 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:41.129 13:40:58 -- target/dif.sh@43 -- # local sub 00:30:41.129 13:40:58 -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.129 13:40:58 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:41.129 13:40:58 -- target/dif.sh@36 -- # local sub_id=0 00:30:41.129 13:40:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:41.129 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.129 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.129 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.129 13:40:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:41.129 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@45 -- # for sub in "$@" 00:30:41.130 13:40:58 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:41.130 13:40:58 -- target/dif.sh@36 -- # local sub_id=1 00:30:41.130 13:40:58 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.130 ************************************ 00:30:41.130 END TEST fio_dif_1_multi_subsystems 00:30:41.130 ************************************ 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 00:30:41.130 real 0m11.242s 00:30:41.130 user 0m19.962s 00:30:41.130 sys 0m1.096s 00:30:41.130 13:40:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 13:40:58 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:41.130 13:40:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:41.130 13:40:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 ************************************ 00:30:41.130 START TEST fio_dif_rand_params 00:30:41.130 ************************************ 00:30:41.130 13:40:58 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:30:41.130 13:40:58 -- target/dif.sh@100 -- # local NULL_DIF 00:30:41.130 13:40:58 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:41.130 13:40:58 -- target/dif.sh@103 -- # NULL_DIF=3 00:30:41.130 13:40:58 -- target/dif.sh@103 -- # bs=128k 00:30:41.130 13:40:58 -- target/dif.sh@103 -- # numjobs=3 00:30:41.130 13:40:58 -- target/dif.sh@103 -- # iodepth=3 00:30:41.130 13:40:58 -- target/dif.sh@103 -- # runtime=5 00:30:41.130 13:40:58 -- target/dif.sh@105 -- # create_subsystems 0 00:30:41.130 13:40:58 -- target/dif.sh@28 -- # local sub 00:30:41.130 13:40:58 -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.130 13:40:58 -- target/dif.sh@31 -- # create_subsystem 0 00:30:41.130 13:40:58 -- target/dif.sh@18 -- # local sub_id=0 00:30:41.130 13:40:58 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 bdev_null0 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.130 13:40:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:41.130 13:40:58 -- common/autotest_common.sh@10 -- # set +x 00:30:41.130 [2024-04-26 13:40:58.506447] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.130 13:40:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:41.130 13:40:58 -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:41.130 13:40:58 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:41.130 13:40:58 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.130 13:40:58 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:41.130 13:40:58 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.130 13:40:58 -- nvmf/common.sh@521 -- # config=() 00:30:41.130 13:40:58 -- nvmf/common.sh@521 -- # local subsystem config 00:30:41.130 13:40:58 -- target/dif.sh@82 -- # gen_fio_conf 00:30:41.130 13:40:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:41.130 13:40:58 -- target/dif.sh@54 -- # local file 00:30:41.130 13:40:58 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:41.130 13:40:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:41.130 { 00:30:41.130 "params": { 00:30:41.130 "name": "Nvme$subsystem", 00:30:41.130 "trtype": "$TEST_TRANSPORT", 00:30:41.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.130 "adrfam": "ipv4", 00:30:41.130 "trsvcid": "$NVMF_PORT", 00:30:41.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.130 "hdgst": ${hdgst:-false}, 00:30:41.130 "ddgst": ${ddgst:-false} 00:30:41.130 }, 00:30:41.130 "method": "bdev_nvme_attach_controller" 00:30:41.130 } 00:30:41.130 EOF 00:30:41.130 )") 00:30:41.130 13:40:58 -- target/dif.sh@56 -- # cat 00:30:41.130 13:40:58 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.130 13:40:58 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:41.130 13:40:58 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:41.130 13:40:58 -- common/autotest_common.sh@1327 -- # shift 00:30:41.130 13:40:58 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:41.130 13:40:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.130 13:40:58 -- nvmf/common.sh@543 -- # cat 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:41.130 13:40:58 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:41.130 13:40:58 -- target/dif.sh@72 -- # (( file <= files )) 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:41.130 13:40:58 -- nvmf/common.sh@545 -- # jq . 
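For this fio_dif_rand_params run, the target side is a DIF-type-3 null bdev exported over the --dif-insert-or-strip TCP transport created at dif.sh@139 above. The rpc_cmd calls traced above, collected into one explicit sequence (sketch; assumes the target's default /var/tmp/spdk.sock RPC socket and the repo's scripts/rpc.py):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420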
00:30:41.130 13:40:58 -- nvmf/common.sh@546 -- # IFS=, 00:30:41.130 13:40:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:41.130 "params": { 00:30:41.130 "name": "Nvme0", 00:30:41.130 "trtype": "tcp", 00:30:41.130 "traddr": "10.0.0.2", 00:30:41.130 "adrfam": "ipv4", 00:30:41.130 "trsvcid": "4420", 00:30:41.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:41.130 "hdgst": false, 00:30:41.130 "ddgst": false 00:30:41.130 }, 00:30:41.130 "method": "bdev_nvme_attach_controller" 00:30:41.130 }' 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:41.130 13:40:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:41.130 13:40:58 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:41.130 13:40:58 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:41.388 13:40:58 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:41.389 13:40:58 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:41.389 13:40:58 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:41.389 13:40:58 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.389 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:41.389 ... 00:30:41.389 fio-3.35 00:30:41.389 Starting 3 threads 00:30:47.951 00:30:47.951 filename0: (groupid=0, jobs=1): err= 0: pid=90832: Fri Apr 26 13:41:04 2024 00:30:47.951 read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(164MiB/5006msec) 00:30:47.951 slat (nsec): min=7794, max=78160, avg=13491.18, stdev=3709.73 00:30:47.951 clat (usec): min=5973, max=53647, avg=11451.50, stdev=4906.39 00:30:47.951 lat (usec): min=5987, max=53662, avg=11464.99, stdev=4906.34 00:30:47.951 clat percentiles (usec): 00:30:47.951 | 1.00th=[ 8029], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:30:47.951 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:30:47.951 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:30:47.951 | 99.00th=[52167], 99.50th=[52167], 99.90th=[53740], 99.95th=[53740], 00:30:47.952 | 99.99th=[53740] 00:30:47.952 bw ( KiB/s): min=28928, max=36096, per=36.52%, avg=33459.20, stdev=2627.52, samples=10 00:30:47.952 iops : min= 226, max= 282, avg=261.40, stdev=20.53, samples=10 00:30:47.952 lat (msec) : 10=11.31%, 20=87.32%, 100=1.38% 00:30:47.952 cpu : usr=91.61%, sys=6.43%, ctx=33, majf=0, minf=0 00:30:47.952 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 issued rwts: total=1309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.952 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.952 filename0: (groupid=0, jobs=1): err= 0: pid=90833: Fri Apr 26 13:41:04 2024 00:30:47.952 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(159MiB/5005msec) 00:30:47.952 slat (nsec): min=7533, max=35526, avg=12345.91, stdev=3330.94 00:30:47.952 clat (usec): min=4862, max=54627, avg=11767.33, stdev=2533.85 00:30:47.952 lat (usec): min=4887, max=54654, avg=11779.68, 
stdev=2534.44 00:30:47.952 clat percentiles (usec): 00:30:47.952 | 1.00th=[ 6325], 5.00th=[ 7832], 10.00th=[ 9765], 20.00th=[10945], 00:30:47.952 | 30.00th=[11469], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:30:47.952 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13435], 00:30:47.952 | 99.00th=[14353], 99.50th=[14615], 99.90th=[52167], 99.95th=[54789], 00:30:47.952 | 99.99th=[54789] 00:30:47.952 bw ( KiB/s): min=31232, max=35584, per=35.51%, avg=32537.60, stdev=1485.14, samples=10 00:30:47.952 iops : min= 244, max= 278, avg=254.20, stdev=11.60, samples=10 00:30:47.952 lat (msec) : 10=10.75%, 20=89.01%, 100=0.24% 00:30:47.952 cpu : usr=92.23%, sys=6.24%, ctx=10, majf=0, minf=0 00:30:47.952 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 issued rwts: total=1274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.952 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.952 filename0: (groupid=0, jobs=1): err= 0: pid=90834: Fri Apr 26 13:41:04 2024 00:30:47.952 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(125MiB/5004msec) 00:30:47.952 slat (nsec): min=7590, max=47174, avg=12483.78, stdev=5402.82 00:30:47.952 clat (usec): min=8793, max=57516, avg=14984.91, stdev=2875.32 00:30:47.952 lat (usec): min=8805, max=57524, avg=14997.40, stdev=2874.67 00:30:47.952 clat percentiles (usec): 00:30:47.952 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[13042], 20.00th=[14353], 00:30:47.952 | 30.00th=[14746], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:30:47.952 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:30:47.952 | 99.00th=[17171], 99.50th=[17433], 99.90th=[57410], 99.95th=[57410], 00:30:47.952 | 99.99th=[57410] 00:30:47.952 bw ( KiB/s): min=23040, max=29242, per=27.89%, avg=25554.60, stdev=1769.68, samples=10 00:30:47.952 iops : min= 180, max= 228, avg=199.60, stdev=13.72, samples=10 00:30:47.952 lat (msec) : 10=6.20%, 20=93.50%, 100=0.30% 00:30:47.952 cpu : usr=92.28%, sys=6.28%, ctx=17, majf=0, minf=0 00:30:47.952 IO depths : 1=21.3%, 2=78.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.952 issued rwts: total=1000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.952 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:47.952 00:30:47.952 Run status group 0 (all jobs): 00:30:47.952 READ: bw=89.5MiB/s (93.8MB/s), 25.0MiB/s-32.7MiB/s (26.2MB/s-34.3MB/s), io=448MiB (470MB), run=5004-5006msec 00:30:47.952 13:41:04 -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:47.952 13:41:04 -- target/dif.sh@43 -- # local sub 00:30:47.952 13:41:04 -- target/dif.sh@45 -- # for sub in "$@" 00:30:47.952 13:41:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:47.952 13:41:04 -- target/dif.sh@36 -- # local sub_id=0 00:30:47.952 13:41:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # NULL_DIF=2 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # bs=4k 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # numjobs=8 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # iodepth=16 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # runtime= 00:30:47.952 13:41:04 -- target/dif.sh@109 -- # files=2 00:30:47.952 13:41:04 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:47.952 13:41:04 -- target/dif.sh@28 -- # local sub 00:30:47.952 13:41:04 -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.952 13:41:04 -- target/dif.sh@31 -- # create_subsystem 0 00:30:47.952 13:41:04 -- target/dif.sh@18 -- # local sub_id=0 00:30:47.952 13:41:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 bdev_null0 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 [2024-04-26 13:41:04.607069] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.952 13:41:04 -- target/dif.sh@31 -- # create_subsystem 1 00:30:47.952 13:41:04 -- target/dif.sh@18 -- # local sub_id=1 00:30:47.952 13:41:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 bdev_null1 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@30 -- # for sub in "$@" 00:30:47.952 13:41:04 -- target/dif.sh@31 -- # create_subsystem 2 00:30:47.952 13:41:04 -- target/dif.sh@18 -- # local sub_id=2 00:30:47.952 13:41:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 bdev_null2 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:47.952 13:41:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:47.952 13:41:04 -- common/autotest_common.sh@10 -- # set +x 00:30:47.952 13:41:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:47.952 13:41:04 -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:47.952 13:41:04 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:47.952 13:41:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:47.952 13:41:04 -- nvmf/common.sh@521 -- # config=() 00:30:47.952 13:41:04 -- nvmf/common.sh@521 -- # local subsystem config 00:30:47.952 13:41:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.952 13:41:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:47.952 13:41:04 -- target/dif.sh@82 -- # gen_fio_conf 00:30:47.952 13:41:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:47.952 { 00:30:47.952 "params": { 00:30:47.952 "name": "Nvme$subsystem", 00:30:47.952 "trtype": "$TEST_TRANSPORT", 00:30:47.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.952 "adrfam": "ipv4", 00:30:47.952 "trsvcid": "$NVMF_PORT", 00:30:47.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.952 "hdgst": ${hdgst:-false}, 00:30:47.952 "ddgst": ${ddgst:-false} 00:30:47.952 }, 00:30:47.952 "method": "bdev_nvme_attach_controller" 00:30:47.952 } 00:30:47.952 EOF 00:30:47.952 )") 00:30:47.952 13:41:04 -- target/dif.sh@54 -- # local file 00:30:47.952 13:41:04 -- target/dif.sh@56 -- # cat 00:30:47.953 13:41:04 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.953 13:41:04 -- 
common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:47.953 13:41:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:47.953 13:41:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:47.953 13:41:04 -- nvmf/common.sh@543 -- # cat 00:30:47.953 13:41:04 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:47.953 13:41:04 -- common/autotest_common.sh@1327 -- # shift 00:30:47.953 13:41:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:47.953 13:41:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.953 13:41:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:47.953 13:41:04 -- target/dif.sh@73 -- # cat 00:30:47.953 13:41:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:47.953 { 00:30:47.953 "params": { 00:30:47.953 "name": "Nvme$subsystem", 00:30:47.953 "trtype": "$TEST_TRANSPORT", 00:30:47.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.953 "adrfam": "ipv4", 00:30:47.953 "trsvcid": "$NVMF_PORT", 00:30:47.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.953 "hdgst": ${hdgst:-false}, 00:30:47.953 "ddgst": ${ddgst:-false} 00:30:47.953 }, 00:30:47.953 "method": "bdev_nvme_attach_controller" 00:30:47.953 } 00:30:47.953 EOF 00:30:47.953 )") 00:30:47.953 13:41:04 -- nvmf/common.sh@543 -- # cat 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file++ )) 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.953 13:41:04 -- target/dif.sh@73 -- # cat 00:30:47.953 13:41:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:47.953 13:41:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:47.953 { 00:30:47.953 "params": { 00:30:47.953 "name": "Nvme$subsystem", 00:30:47.953 "trtype": "$TEST_TRANSPORT", 00:30:47.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:47.953 "adrfam": "ipv4", 00:30:47.953 "trsvcid": "$NVMF_PORT", 00:30:47.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:47.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:47.953 "hdgst": ${hdgst:-false}, 00:30:47.953 "ddgst": ${ddgst:-false} 00:30:47.953 }, 00:30:47.953 "method": "bdev_nvme_attach_controller" 00:30:47.953 } 00:30:47.953 EOF 00:30:47.953 )") 00:30:47.953 13:41:04 -- nvmf/common.sh@543 -- # cat 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file++ )) 00:30:47.953 13:41:04 -- target/dif.sh@72 -- # (( file <= files )) 00:30:47.953 13:41:04 -- nvmf/common.sh@545 -- # jq . 
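The fio invocation traced just below (LD_PRELOAD of build/fio/spdk_bdev, the JSON config on /dev/fd/62 and the generated job file on /dev/fd/61) can be approximated with regular files. A rough sketch under stated assumptions: the standard SPDK JSON-config wrapper around the bdev_nvme_attach_controller entry that gen_nvmf_target_json prints in this log, the Nvme0n1 namespace bdev name, and a pared-down job file (the real gen_fio_conf output carries more options than shown):

#!/usr/bin/env bash
# Sketch only: reproduce the traced fio run with plain files instead of /dev/fd.
PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Assumed wrapper layout (standard SPDK JSON config); the inner entry matches the
# Nvme0 params printed by gen_nvmf_target_json in this section of the log.
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Pared-down job file for this test's parameters (4k blocks, 8 jobs, iodepth 16,
# randread as reported in the fio banner); Nvme0n1 is the assumed name of the
# namespace bdev created by the attach above.
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=$PLUGIN /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
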
00:30:47.953 13:41:04 -- nvmf/common.sh@546 -- # IFS=, 00:30:47.953 13:41:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:47.953 "params": { 00:30:47.953 "name": "Nvme0", 00:30:47.953 "trtype": "tcp", 00:30:47.953 "traddr": "10.0.0.2", 00:30:47.953 "adrfam": "ipv4", 00:30:47.953 "trsvcid": "4420", 00:30:47.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:47.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:47.953 "hdgst": false, 00:30:47.953 "ddgst": false 00:30:47.953 }, 00:30:47.953 "method": "bdev_nvme_attach_controller" 00:30:47.953 },{ 00:30:47.953 "params": { 00:30:47.953 "name": "Nvme1", 00:30:47.953 "trtype": "tcp", 00:30:47.953 "traddr": "10.0.0.2", 00:30:47.953 "adrfam": "ipv4", 00:30:47.953 "trsvcid": "4420", 00:30:47.953 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:47.953 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:47.953 "hdgst": false, 00:30:47.953 "ddgst": false 00:30:47.953 }, 00:30:47.953 "method": "bdev_nvme_attach_controller" 00:30:47.953 },{ 00:30:47.953 "params": { 00:30:47.953 "name": "Nvme2", 00:30:47.953 "trtype": "tcp", 00:30:47.953 "traddr": "10.0.0.2", 00:30:47.953 "adrfam": "ipv4", 00:30:47.953 "trsvcid": "4420", 00:30:47.953 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:47.953 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:47.953 "hdgst": false, 00:30:47.953 "ddgst": false 00:30:47.953 }, 00:30:47.953 "method": "bdev_nvme_attach_controller" 00:30:47.953 }' 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:47.953 13:41:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:47.953 13:41:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:47.953 13:41:04 -- common/autotest_common.sh@1331 -- # asan_lib= 00:30:47.953 13:41:04 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:30:47.953 13:41:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:47.953 13:41:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:47.953 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:47.953 ... 00:30:47.953 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:47.953 ... 00:30:47.953 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:47.953 ... 
00:30:47.953 fio-3.35 00:30:47.953 Starting 24 threads 00:31:00.165 00:31:00.165 filename0: (groupid=0, jobs=1): err= 0: pid=90937: Fri Apr 26 13:41:15 2024 00:31:00.165 read: IOPS=238, BW=952KiB/s (975kB/s)(9588KiB/10067msec) 00:31:00.165 slat (usec): min=6, max=8024, avg=18.09, stdev=234.77 00:31:00.165 clat (msec): min=2, max=167, avg=67.00, stdev=23.63 00:31:00.165 lat (msec): min=2, max=167, avg=67.02, stdev=23.63 00:31:00.165 clat percentiles (msec): 00:31:00.165 | 1.00th=[ 5], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 48], 00:31:00.165 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:31:00.165 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:31:00.165 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:31:00.166 | 99.99th=[ 169] 00:31:00.166 bw ( KiB/s): min= 640, max= 1584, per=4.55%, avg=952.30, stdev=202.02, samples=20 00:31:00.166 iops : min= 160, max= 396, avg=238.05, stdev=50.51, samples=20 00:31:00.166 lat (msec) : 4=0.67%, 10=2.67%, 20=0.67%, 50=23.99%, 100=64.71% 00:31:00.166 lat (msec) : 250=7.30% 00:31:00.166 cpu : usr=32.55%, sys=0.78%, ctx=873, majf=0, minf=9 00:31:00.166 IO depths : 1=0.9%, 2=1.9%, 4=7.8%, 8=76.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90938: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=247, BW=992KiB/s (1016kB/s)(9956KiB/10039msec) 00:31:00.166 slat (usec): min=6, max=8054, avg=15.86, stdev=180.49 00:31:00.166 clat (msec): min=34, max=153, avg=64.35, stdev=19.20 00:31:00.166 lat (msec): min=34, max=153, avg=64.37, stdev=19.19 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 48], 00:31:00.166 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:31:00.166 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 88], 95.00th=[ 99], 00:31:00.166 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:31:00.166 | 99.99th=[ 155] 00:31:00.166 bw ( KiB/s): min= 688, max= 1248, per=4.73%, avg=990.95, stdev=136.23, samples=20 00:31:00.166 iops : min= 172, max= 312, avg=247.70, stdev=34.00, samples=20 00:31:00.166 lat (msec) : 50=31.66%, 100=63.72%, 250=4.62% 00:31:00.166 cpu : usr=33.99%, sys=0.86%, ctx=929, majf=0, minf=9 00:31:00.166 IO depths : 1=0.5%, 2=1.2%, 4=7.0%, 8=78.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=89.3%, 8=6.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90939: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=192, BW=771KiB/s (790kB/s)(7720KiB/10013msec) 00:31:00.166 slat (usec): min=4, max=8024, avg=15.68, stdev=182.44 00:31:00.166 clat (msec): min=44, max=155, avg=82.80, stdev=20.61 00:31:00.166 lat (msec): min=44, max=155, avg=82.82, stdev=20.61 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 72], 00:31:00.166 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 84], 00:31:00.166 | 70.00th=[ 85], 
80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 121], 00:31:00.166 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:31:00.166 | 99.99th=[ 157] 00:31:00.166 bw ( KiB/s): min= 512, max= 896, per=3.67%, avg=769.15, stdev=95.21, samples=20 00:31:00.166 iops : min= 128, max= 224, avg=192.20, stdev=23.79, samples=20 00:31:00.166 lat (msec) : 50=4.40%, 100=79.84%, 250=15.75% 00:31:00.166 cpu : usr=32.32%, sys=0.90%, ctx=872, majf=0, minf=9 00:31:00.166 IO depths : 1=3.1%, 2=7.2%, 4=18.1%, 8=62.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90940: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=234, BW=937KiB/s (959kB/s)(9396KiB/10031msec) 00:31:00.166 slat (usec): min=7, max=7023, avg=14.86, stdev=166.74 00:31:00.166 clat (msec): min=26, max=150, avg=68.17, stdev=21.39 00:31:00.166 lat (msec): min=26, max=150, avg=68.18, stdev=21.38 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 49], 00:31:00.166 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 64], 60.00th=[ 72], 00:31:00.166 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 107], 00:31:00.166 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:31:00.166 | 99.99th=[ 150] 00:31:00.166 bw ( KiB/s): min= 696, max= 1200, per=4.46%, avg=933.05, stdev=132.13, samples=20 00:31:00.166 iops : min= 174, max= 300, avg=233.25, stdev=33.01, samples=20 00:31:00.166 lat (msec) : 50=22.78%, 100=70.33%, 250=6.90% 00:31:00.166 cpu : usr=40.39%, sys=1.29%, ctx=1252, majf=0, minf=9 00:31:00.166 IO depths : 1=1.7%, 2=3.9%, 4=11.7%, 8=70.7%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90941: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=213, BW=855KiB/s (875kB/s)(8572KiB/10026msec) 00:31:00.166 slat (usec): min=5, max=8025, avg=21.87, stdev=299.57 00:31:00.166 clat (msec): min=27, max=167, avg=74.71, stdev=23.83 00:31:00.166 lat (msec): min=27, max=167, avg=74.73, stdev=23.83 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 52], 00:31:00.166 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:31:00.166 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:31:00.166 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:31:00.166 | 99.99th=[ 167] 00:31:00.166 bw ( KiB/s): min= 560, max= 1152, per=4.06%, avg=850.75, stdev=147.95, samples=20 00:31:00.166 iops : min= 140, max= 288, avg=212.65, stdev=37.00, samples=20 00:31:00.166 lat (msec) : 50=18.11%, 100=68.92%, 250=12.97% 00:31:00.166 cpu : usr=32.17%, sys=0.95%, ctx=878, majf=0, minf=9 00:31:00.166 IO depths : 1=1.6%, 2=3.5%, 4=12.0%, 8=71.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90942: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=247, BW=988KiB/s (1012kB/s)(9936KiB/10053msec) 00:31:00.166 slat (usec): min=7, max=8027, avg=22.44, stdev=263.77 00:31:00.166 clat (msec): min=5, max=135, avg=64.44, stdev=20.10 00:31:00.166 lat (msec): min=5, max=135, avg=64.46, stdev=20.11 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 49], 00:31:00.166 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 63], 60.00th=[ 70], 00:31:00.166 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 97], 00:31:00.166 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:31:00.166 | 99.99th=[ 136] 00:31:00.166 bw ( KiB/s): min= 768, max= 1536, per=4.71%, avg=987.20, stdev=186.69, samples=20 00:31:00.166 iops : min= 192, max= 384, avg=246.80, stdev=46.67, samples=20 00:31:00.166 lat (msec) : 10=1.93%, 20=0.64%, 50=20.21%, 100=73.23%, 250=3.99% 00:31:00.166 cpu : usr=42.30%, sys=1.27%, ctx=1383, majf=0, minf=9 00:31:00.166 IO depths : 1=1.0%, 2=2.1%, 4=8.5%, 8=75.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90943: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=194, BW=776KiB/s (795kB/s)(7768KiB/10009msec) 00:31:00.166 slat (usec): min=4, max=8025, avg=25.01, stdev=296.27 00:31:00.166 clat (msec): min=35, max=157, avg=82.28, stdev=19.47 00:31:00.166 lat (msec): min=35, max=157, avg=82.31, stdev=19.47 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 70], 00:31:00.166 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 79], 60.00th=[ 84], 00:31:00.166 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 121], 00:31:00.166 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 159], 99.95th=[ 159], 00:31:00.166 | 99.99th=[ 159] 00:31:00.166 bw ( KiB/s): min= 552, max= 896, per=3.66%, avg=767.53, stdev=92.41, samples=19 00:31:00.166 iops : min= 138, max= 224, avg=191.84, stdev=23.12, samples=19 00:31:00.166 lat (msec) : 50=4.27%, 100=79.20%, 250=16.53% 00:31:00.166 cpu : usr=37.74%, sys=0.99%, ctx=1109, majf=0, minf=9 00:31:00.166 IO depths : 1=2.7%, 2=6.4%, 4=17.1%, 8=63.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=91.7%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=1942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.166 filename0: (groupid=0, jobs=1): err= 0: pid=90944: Fri Apr 26 13:41:15 2024 00:31:00.166 read: IOPS=214, BW=859KiB/s (879kB/s)(8588KiB/10002msec) 00:31:00.166 slat (usec): min=7, max=4019, avg=17.28, stdev=157.07 00:31:00.166 clat (msec): min=34, max=134, avg=74.41, stdev=20.20 00:31:00.166 lat (msec): min=34, max=134, avg=74.42, stdev=20.20 00:31:00.166 clat percentiles (msec): 00:31:00.166 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:31:00.166 | 30.00th=[ 65], 40.00th=[ 71], 
50.00th=[ 73], 60.00th=[ 78], 00:31:00.166 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 109], 00:31:00.166 | 99.00th=[ 127], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:31:00.166 | 99.99th=[ 136] 00:31:00.166 bw ( KiB/s): min= 640, max= 1088, per=4.08%, avg=854.26, stdev=128.97, samples=19 00:31:00.166 iops : min= 160, max= 272, avg=213.53, stdev=32.24, samples=19 00:31:00.166 lat (msec) : 50=13.83%, 100=74.38%, 250=11.78% 00:31:00.166 cpu : usr=44.02%, sys=1.18%, ctx=1300, majf=0, minf=9 00:31:00.166 IO depths : 1=2.1%, 2=4.7%, 4=13.1%, 8=69.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:00.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 complete : 0=0.0%, 4=90.9%, 8=4.1%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.166 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.166 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90945: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=194, BW=777KiB/s (796kB/s)(7784KiB/10012msec) 00:31:00.167 slat (usec): min=5, max=8024, avg=27.47, stdev=362.90 00:31:00.167 clat (msec): min=11, max=168, avg=82.13, stdev=21.55 00:31:00.167 lat (msec): min=11, max=168, avg=82.15, stdev=21.54 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 32], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 69], 00:31:00.167 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:31:00.167 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 120], 00:31:00.167 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:31:00.167 | 99.99th=[ 169] 00:31:00.167 bw ( KiB/s): min= 640, max= 896, per=3.68%, avg=771.37, stdev=82.34, samples=19 00:31:00.167 iops : min= 160, max= 224, avg=192.84, stdev=20.58, samples=19 00:31:00.167 lat (msec) : 20=0.82%, 50=3.80%, 100=76.36%, 250=19.01% 00:31:00.167 cpu : usr=36.06%, sys=1.14%, ctx=1051, majf=0, minf=9 00:31:00.167 IO depths : 1=3.3%, 2=7.0%, 4=17.7%, 8=62.6%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=91.9%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90946: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=198, BW=793KiB/s (812kB/s)(7948KiB/10021msec) 00:31:00.167 slat (nsec): min=4789, max=31708, avg=10439.81, stdev=3607.70 00:31:00.167 clat (msec): min=35, max=184, avg=80.62, stdev=22.22 00:31:00.167 lat (msec): min=35, max=184, avg=80.63, stdev=22.22 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:31:00.167 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 82], 00:31:00.167 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 111], 95.00th=[ 121], 00:31:00.167 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 186], 99.95th=[ 186], 00:31:00.167 | 99.99th=[ 186] 00:31:00.167 bw ( KiB/s): min= 600, max= 1072, per=3.76%, avg=787.80, stdev=103.03, samples=20 00:31:00.167 iops : min= 150, max= 268, avg=196.90, stdev=25.78, samples=20 00:31:00.167 lat (msec) : 50=8.56%, 100=72.92%, 250=18.52% 00:31:00.167 cpu : usr=33.81%, sys=0.99%, ctx=959, majf=0, minf=9 00:31:00.167 IO depths : 1=2.3%, 2=5.0%, 4=14.8%, 8=67.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90947: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=239, BW=956KiB/s (979kB/s)(9604KiB/10042msec) 00:31:00.167 slat (usec): min=4, max=8021, avg=22.36, stdev=266.43 00:31:00.167 clat (msec): min=21, max=143, avg=66.76, stdev=20.03 00:31:00.167 lat (msec): min=21, max=143, avg=66.78, stdev=20.04 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 29], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 49], 00:31:00.167 | 30.00th=[ 54], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 72], 00:31:00.167 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 108], 00:31:00.167 | 99.00th=[ 121], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:31:00.167 | 99.99th=[ 144] 00:31:00.167 bw ( KiB/s): min= 736, max= 1142, per=4.55%, avg=953.35, stdev=128.94, samples=20 00:31:00.167 iops : min= 184, max= 285, avg=238.30, stdev=32.18, samples=20 00:31:00.167 lat (msec) : 50=23.62%, 100=69.72%, 250=6.66% 00:31:00.167 cpu : usr=37.70%, sys=1.13%, ctx=1309, majf=0, minf=9 00:31:00.167 IO depths : 1=1.1%, 2=2.7%, 4=9.5%, 8=74.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=2401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90948: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=253, BW=1016KiB/s (1040kB/s)(9.97MiB/10051msec) 00:31:00.167 slat (usec): min=5, max=4037, avg=13.61, stdev=112.64 00:31:00.167 clat (msec): min=4, max=148, avg=62.77, stdev=21.09 00:31:00.167 lat (msec): min=4, max=148, avg=62.79, stdev=21.09 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 8], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 48], 00:31:00.167 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 67], 00:31:00.167 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 90], 95.00th=[ 104], 00:31:00.167 | 99.00th=[ 126], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 148], 00:31:00.167 | 99.99th=[ 148] 00:31:00.167 bw ( KiB/s): min= 512, max= 1664, per=4.84%, avg=1014.40, stdev=215.11, samples=20 00:31:00.167 iops : min= 128, max= 416, avg=253.60, stdev=53.78, samples=20 00:31:00.167 lat (msec) : 10=2.43%, 20=0.08%, 50=28.33%, 100=63.32%, 250=5.84% 00:31:00.167 cpu : usr=42.63%, sys=1.41%, ctx=1304, majf=0, minf=9 00:31:00.167 IO depths : 1=1.5%, 2=3.4%, 4=11.8%, 8=71.7%, 16=11.6%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=90.4%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=2552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90949: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=232, BW=929KiB/s (951kB/s)(9308KiB/10023msec) 00:31:00.167 slat (nsec): min=7467, max=58633, avg=10771.67, stdev=4823.86 00:31:00.167 clat (msec): min=32, max=191, avg=68.85, stdev=22.17 00:31:00.167 lat (msec): min=32, max=191, avg=68.86, stdev=22.17 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 36], 
5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 50], 00:31:00.167 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 71], 00:31:00.167 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 112], 00:31:00.167 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:31:00.167 | 99.99th=[ 192] 00:31:00.167 bw ( KiB/s): min= 504, max= 1277, per=4.41%, avg=924.25, stdev=195.65, samples=20 00:31:00.167 iops : min= 126, max= 319, avg=231.05, stdev=48.89, samples=20 00:31:00.167 lat (msec) : 50=21.96%, 100=68.93%, 250=9.11% 00:31:00.167 cpu : usr=44.14%, sys=1.34%, ctx=1300, majf=0, minf=9 00:31:00.167 IO depths : 1=1.2%, 2=2.5%, 4=9.8%, 8=74.4%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90950: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=226, BW=905KiB/s (927kB/s)(9092KiB/10041msec) 00:31:00.167 slat (usec): min=6, max=4021, avg=12.28, stdev=84.24 00:31:00.167 clat (msec): min=32, max=142, avg=70.58, stdev=21.07 00:31:00.167 lat (msec): min=32, max=143, avg=70.59, stdev=21.07 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 36], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 52], 00:31:00.167 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 75], 00:31:00.167 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 109], 00:31:00.167 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:31:00.167 | 99.99th=[ 144] 00:31:00.167 bw ( KiB/s): min= 766, max= 1200, per=4.31%, avg=902.75, stdev=131.94, samples=20 00:31:00.167 iops : min= 191, max= 300, avg=225.65, stdev=33.02, samples=20 00:31:00.167 lat (msec) : 50=18.26%, 100=71.54%, 250=10.21% 00:31:00.167 cpu : usr=41.31%, sys=1.11%, ctx=1312, majf=0, minf=9 00:31:00.167 IO depths : 1=1.0%, 2=2.2%, 4=9.1%, 8=74.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=2273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90951: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=227, BW=909KiB/s (931kB/s)(9128KiB/10042msec) 00:31:00.167 slat (usec): min=6, max=4023, avg=12.60, stdev=84.08 00:31:00.167 clat (msec): min=27, max=168, avg=70.29, stdev=20.71 00:31:00.167 lat (msec): min=27, max=168, avg=70.31, stdev=20.71 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 52], 00:31:00.167 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 73], 00:31:00.167 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:31:00.167 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 169], 99.95th=[ 169], 00:31:00.167 | 99.99th=[ 169] 00:31:00.167 bw ( KiB/s): min= 688, max= 1184, per=4.33%, avg=906.30, stdev=141.59, samples=20 00:31:00.167 iops : min= 172, max= 296, avg=226.55, stdev=35.39, samples=20 00:31:00.167 lat (msec) : 50=18.80%, 100=74.28%, 250=6.92% 00:31:00.167 cpu : usr=38.05%, sys=1.18%, ctx=1057, majf=0, minf=9 00:31:00.167 IO depths : 1=1.5%, 2=3.0%, 4=11.3%, 8=72.0%, 16=12.1%, 32=0.0%, >=64=0.0% 
00:31:00.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.167 issued rwts: total=2282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.167 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.167 filename1: (groupid=0, jobs=1): err= 0: pid=90952: Fri Apr 26 13:41:15 2024 00:31:00.167 read: IOPS=234, BW=938KiB/s (960kB/s)(9436KiB/10064msec) 00:31:00.167 slat (nsec): min=6222, max=63819, avg=10751.19, stdev=4237.00 00:31:00.167 clat (msec): min=7, max=170, avg=68.18, stdev=22.67 00:31:00.167 lat (msec): min=7, max=170, avg=68.20, stdev=22.67 00:31:00.167 clat percentiles (msec): 00:31:00.167 | 1.00th=[ 16], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:31:00.167 | 30.00th=[ 55], 40.00th=[ 59], 50.00th=[ 66], 60.00th=[ 72], 00:31:00.167 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 115], 00:31:00.167 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 171], 99.95th=[ 171], 00:31:00.167 | 99.99th=[ 171] 00:31:00.167 bw ( KiB/s): min= 600, max= 1304, per=4.47%, avg=936.85, stdev=168.14, samples=20 00:31:00.167 iops : min= 150, max= 326, avg=234.20, stdev=42.02, samples=20 00:31:00.167 lat (msec) : 10=0.30%, 20=1.06%, 50=18.61%, 100=71.56%, 250=8.48% 00:31:00.167 cpu : usr=39.10%, sys=1.22%, ctx=1334, majf=0, minf=9 00:31:00.168 IO depths : 1=1.4%, 2=3.1%, 4=12.3%, 8=71.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=2359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90953: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=229, BW=917KiB/s (939kB/s)(9212KiB/10042msec) 00:31:00.168 slat (usec): min=7, max=4028, avg=12.04, stdev=83.81 00:31:00.168 clat (msec): min=26, max=143, avg=69.70, stdev=20.80 00:31:00.168 lat (msec): min=26, max=143, avg=69.71, stdev=20.81 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:31:00.168 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:31:00.168 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 112], 00:31:00.168 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:31:00.168 | 99.99th=[ 144] 00:31:00.168 bw ( KiB/s): min= 640, max= 1248, per=4.37%, avg=914.65, stdev=144.84, samples=20 00:31:00.168 iops : min= 160, max= 312, avg=228.65, stdev=36.20, samples=20 00:31:00.168 lat (msec) : 50=22.75%, 100=69.43%, 250=7.82% 00:31:00.168 cpu : usr=34.79%, sys=1.09%, ctx=1005, majf=0, minf=9 00:31:00.168 IO depths : 1=0.7%, 2=1.6%, 4=7.9%, 8=76.8%, 16=13.1%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=2303,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90954: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=203, BW=814KiB/s (833kB/s)(8148KiB/10013msec) 00:31:00.168 slat (usec): min=4, max=8023, avg=24.13, stdev=319.80 00:31:00.168 clat (msec): min=35, max=168, avg=78.48, stdev=20.87 00:31:00.168 lat (msec): min=35, max=168, avg=78.51, stdev=20.87 
00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 61], 00:31:00.168 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 78], 00:31:00.168 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:31:00.168 | 99.00th=[ 136], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:31:00.168 | 99.99th=[ 169] 00:31:00.168 bw ( KiB/s): min= 640, max= 1040, per=3.86%, avg=808.30, stdev=118.29, samples=20 00:31:00.168 iops : min= 160, max= 260, avg=202.05, stdev=29.59, samples=20 00:31:00.168 lat (msec) : 50=8.20%, 100=77.17%, 250=14.63% 00:31:00.168 cpu : usr=35.59%, sys=0.99%, ctx=885, majf=0, minf=9 00:31:00.168 IO depths : 1=2.1%, 2=4.7%, 4=13.5%, 8=68.4%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=91.0%, 8=4.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90955: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=199, BW=796KiB/s (816kB/s)(7972KiB/10010msec) 00:31:00.168 slat (usec): min=4, max=8020, avg=16.43, stdev=200.68 00:31:00.168 clat (msec): min=37, max=190, avg=80.22, stdev=23.06 00:31:00.168 lat (msec): min=37, max=190, avg=80.24, stdev=23.07 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 62], 00:31:00.168 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:31:00.168 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 130], 00:31:00.168 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 190], 99.95th=[ 190], 00:31:00.168 | 99.99th=[ 190] 00:31:00.168 bw ( KiB/s): min= 472, max= 1045, per=3.77%, avg=790.65, stdev=133.07, samples=20 00:31:00.168 iops : min= 118, max= 261, avg=197.65, stdev=33.24, samples=20 00:31:00.168 lat (msec) : 50=7.33%, 100=76.27%, 250=16.41% 00:31:00.168 cpu : usr=34.25%, sys=0.96%, ctx=968, majf=0, minf=9 00:31:00.168 IO depths : 1=2.5%, 2=5.2%, 4=14.2%, 8=67.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90956: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=203, BW=816KiB/s (835kB/s)(8160KiB/10006msec) 00:31:00.168 slat (usec): min=4, max=4024, avg=13.03, stdev=88.97 00:31:00.168 clat (msec): min=13, max=160, avg=78.41, stdev=21.00 00:31:00.168 lat (msec): min=13, max=160, avg=78.42, stdev=21.00 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 62], 00:31:00.168 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 81], 00:31:00.168 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 115], 00:31:00.168 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 161], 99.95th=[ 161], 00:31:00.168 | 99.99th=[ 161] 00:31:00.168 bw ( KiB/s): min= 592, max= 1000, per=3.82%, avg=800.00, stdev=98.12, samples=19 00:31:00.168 iops : min= 148, max= 250, avg=200.00, stdev=24.53, samples=19 00:31:00.168 lat (msec) : 20=0.29%, 50=7.06%, 100=79.51%, 250=13.14% 00:31:00.168 cpu : usr=40.09%, sys=1.10%, ctx=1269, majf=0, minf=9 00:31:00.168 IO 
depths : 1=1.3%, 2=2.9%, 4=10.6%, 8=72.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=90.4%, 8=5.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90957: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=196, BW=786KiB/s (804kB/s)(7864KiB/10010msec) 00:31:00.168 slat (usec): min=4, max=8021, avg=24.66, stdev=325.52 00:31:00.168 clat (msec): min=20, max=192, avg=81.32, stdev=22.25 00:31:00.168 lat (msec): min=20, max=192, avg=81.34, stdev=22.26 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 70], 00:31:00.168 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:31:00.168 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 120], 00:31:00.168 | 99.00th=[ 155], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:31:00.168 | 99.99th=[ 192] 00:31:00.168 bw ( KiB/s): min= 472, max= 992, per=3.73%, avg=780.63, stdev=108.49, samples=19 00:31:00.168 iops : min= 118, max= 248, avg=195.16, stdev=27.12, samples=19 00:31:00.168 lat (msec) : 50=7.58%, 100=75.99%, 250=16.43% 00:31:00.168 cpu : usr=37.10%, sys=1.12%, ctx=975, majf=0, minf=9 00:31:00.168 IO depths : 1=3.2%, 2=6.8%, 4=16.5%, 8=63.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=1966,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90958: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=188, BW=753KiB/s (771kB/s)(7536KiB/10006msec) 00:31:00.168 slat (usec): min=4, max=8018, avg=16.59, stdev=206.41 00:31:00.168 clat (msec): min=10, max=178, avg=84.86, stdev=21.15 00:31:00.168 lat (msec): min=10, max=178, avg=84.88, stdev=21.15 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 39], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 72], 00:31:00.168 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 84], 00:31:00.168 | 70.00th=[ 94], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 128], 00:31:00.168 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 180], 00:31:00.168 | 99.99th=[ 180] 00:31:00.168 bw ( KiB/s): min= 584, max= 896, per=3.55%, avg=743.58, stdev=80.29, samples=19 00:31:00.168 iops : min= 146, max= 224, avg=185.89, stdev=20.07, samples=19 00:31:00.168 lat (msec) : 20=0.32%, 50=2.28%, 100=77.97%, 250=19.43% 00:31:00.168 cpu : usr=36.37%, sys=1.08%, ctx=1020, majf=0, minf=9 00:31:00.168 IO depths : 1=3.4%, 2=7.4%, 4=18.5%, 8=61.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90959: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=234, BW=939KiB/s (961kB/s)(9420KiB/10033msec) 00:31:00.168 slat (usec): min=4, max=4022, avg=14.09, stdev=116.94 00:31:00.168 clat (msec): min=32, max=161, avg=68.04, stdev=20.08 
00:31:00.168 lat (msec): min=32, max=161, avg=68.05, stdev=20.08 00:31:00.168 clat percentiles (msec): 00:31:00.168 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:31:00.168 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 65], 60.00th=[ 71], 00:31:00.168 | 70.00th=[ 74], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 107], 00:31:00.168 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 161], 99.95th=[ 161], 00:31:00.168 | 99.99th=[ 161] 00:31:00.168 bw ( KiB/s): min= 722, max= 1232, per=4.47%, avg=935.00, stdev=154.95, samples=20 00:31:00.168 iops : min= 180, max= 308, avg=233.70, stdev=38.81, samples=20 00:31:00.168 lat (msec) : 50=19.53%, 100=72.61%, 250=7.86% 00:31:00.168 cpu : usr=44.41%, sys=1.02%, ctx=1312, majf=0, minf=9 00:31:00.168 IO depths : 1=1.7%, 2=3.5%, 4=10.8%, 8=72.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:31:00.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.168 issued rwts: total=2355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.168 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.168 filename2: (groupid=0, jobs=1): err= 0: pid=90960: Fri Apr 26 13:41:15 2024 00:31:00.168 read: IOPS=209, BW=839KiB/s (859kB/s)(8416KiB/10027msec) 00:31:00.168 slat (usec): min=7, max=8020, avg=16.86, stdev=194.17 00:31:00.168 clat (msec): min=36, max=149, avg=76.10, stdev=19.81 00:31:00.168 lat (msec): min=36, max=149, avg=76.12, stdev=19.81 00:31:00.168 clat percentiles (msec): 00:31:00.169 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 59], 00:31:00.169 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:31:00.169 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 112], 00:31:00.169 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 150], 99.95th=[ 150], 00:31:00.169 | 99.99th=[ 150] 00:31:00.169 bw ( KiB/s): min= 640, max= 992, per=3.99%, avg=835.20, stdev=100.37, samples=20 00:31:00.169 iops : min= 160, max= 248, avg=208.80, stdev=25.09, samples=20 00:31:00.169 lat (msec) : 50=10.65%, 100=77.95%, 250=11.41% 00:31:00.169 cpu : usr=36.59%, sys=1.06%, ctx=1116, majf=0, minf=9 00:31:00.169 IO depths : 1=1.7%, 2=3.7%, 4=12.1%, 8=70.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:31:00.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.169 complete : 0=0.0%, 4=90.4%, 8=5.0%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.169 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.169 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:00.169 00:31:00.169 Run status group 0 (all jobs): 00:31:00.169 READ: bw=20.4MiB/s (21.4MB/s), 753KiB/s-1016KiB/s (771kB/s-1040kB/s), io=206MiB (216MB), run=10002-10067msec 00:31:00.169 13:41:16 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:00.169 13:41:16 -- target/dif.sh@43 -- # local sub 00:31:00.169 13:41:16 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.169 13:41:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:00.169 13:41:16 -- target/dif.sh@36 -- # local sub_id=0 00:31:00.169 13:41:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 
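The destroy_subsystems 0 1 2 sequence being traced here mirrors the setup in reverse: each NVMe-oF subsystem is deleted first, then the null bdev behind it. A minimal sketch of the same teardown issued directly through scripts/rpc.py (socket path assumed, as in the setup sketch earlier in this run):

#!/usr/bin/env bash
# Sketch only: delete each subsystem, then its backing null bdev, matching the
# rpc_cmd nvmf_delete_subsystem / bdev_null_delete calls traced in this section.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"    # assumed default RPC socket

for sub in 0 1 2; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
    $RPC bdev_null_delete "bdev_null$sub"
done
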
00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.169 13:41:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:00.169 13:41:16 -- target/dif.sh@36 -- # local sub_id=1 00:31:00.169 13:41:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.169 13:41:16 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:00.169 13:41:16 -- target/dif.sh@36 -- # local sub_id=2 00:31:00.169 13:41:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # numjobs=2 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # iodepth=8 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # runtime=5 00:31:00.169 13:41:16 -- target/dif.sh@115 -- # files=1 00:31:00.169 13:41:16 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:00.169 13:41:16 -- target/dif.sh@28 -- # local sub 00:31:00.169 13:41:16 -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.169 13:41:16 -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.169 13:41:16 -- target/dif.sh@18 -- # local sub_id=0 00:31:00.169 13:41:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 bdev_null0 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 [2024-04-26 13:41:16.104394] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.169 13:41:16 -- target/dif.sh@31 -- # create_subsystem 1 00:31:00.169 13:41:16 -- target/dif.sh@18 -- # local sub_id=1 00:31:00.169 13:41:16 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 bdev_null1 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.169 13:41:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:00.169 13:41:16 -- common/autotest_common.sh@10 -- # set +x 00:31:00.169 13:41:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:00.169 13:41:16 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:00.169 13:41:16 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:00.169 13:41:16 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:00.169 13:41:16 -- nvmf/common.sh@521 -- # config=() 00:31:00.169 13:41:16 -- nvmf/common.sh@521 -- # local subsystem config 00:31:00.169 13:41:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:00.169 13:41:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:00.169 { 00:31:00.169 "params": { 00:31:00.169 "name": "Nvme$subsystem", 00:31:00.169 "trtype": "$TEST_TRANSPORT", 00:31:00.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.169 "adrfam": "ipv4", 00:31:00.169 "trsvcid": "$NVMF_PORT", 00:31:00.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.169 "hdgst": ${hdgst:-false}, 00:31:00.169 "ddgst": ${ddgst:-false} 00:31:00.169 }, 00:31:00.169 "method": "bdev_nvme_attach_controller" 00:31:00.169 } 00:31:00.169 EOF 00:31:00.169 )") 00:31:00.169 13:41:16 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.169 13:41:16 -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.169 13:41:16 -- target/dif.sh@54 -- # local file 00:31:00.169 13:41:16 -- target/dif.sh@56 -- # cat 00:31:00.169 13:41:16 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.169 13:41:16 -- nvmf/common.sh@543 -- # cat 
00:31:00.169 13:41:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:00.169 13:41:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.169 13:41:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:00.169 13:41:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:00.169 13:41:16 -- common/autotest_common.sh@1327 -- # shift 00:31:00.169 13:41:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:00.169 13:41:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.169 13:41:16 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.169 13:41:16 -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.169 13:41:16 -- target/dif.sh@73 -- # cat 00:31:00.169 13:41:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:00.169 13:41:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:00.169 { 00:31:00.169 "params": { 00:31:00.169 "name": "Nvme$subsystem", 00:31:00.169 "trtype": "$TEST_TRANSPORT", 00:31:00.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.169 "adrfam": "ipv4", 00:31:00.169 "trsvcid": "$NVMF_PORT", 00:31:00.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.169 "hdgst": ${hdgst:-false}, 00:31:00.169 "ddgst": ${ddgst:-false} 00:31:00.169 }, 00:31:00.169 "method": "bdev_nvme_attach_controller" 00:31:00.169 } 00:31:00.169 EOF 00:31:00.169 )") 00:31:00.169 13:41:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:00.169 13:41:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:00.169 13:41:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:00.169 13:41:16 -- nvmf/common.sh@543 -- # cat 00:31:00.169 13:41:16 -- target/dif.sh@72 -- # (( file++ )) 00:31:00.169 13:41:16 -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.169 13:41:16 -- nvmf/common.sh@545 -- # jq . 
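Editor's note: the fio job section itself is generated by gen_fio_conf and streamed to fio over /dev/fd/61, so it never appears in this log. Given the parameters set for this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5), it is roughly of the shape sketched below; the bdev names are an assumption and stand in for whatever namespaces the two attached controllers expose (typically Nvme0n1 and Nvme1n1).

    [global]
    ; spdk_bdev is provided by the LD_PRELOADed fio plugin; in this run it is
    ; actually passed on the fio command line rather than in the job file
    ioengine=spdk_bdev
    thread=1
    rw=randread
    ; read/write/trim block sizes, matching the job banner printed just below
    bs=8k,16k,128k
    iodepth=8
    runtime=5
    time_based=1
    numjobs=2

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1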
00:31:00.169 13:41:16 -- nvmf/common.sh@546 -- # IFS=, 00:31:00.169 13:41:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:00.170 "params": { 00:31:00.170 "name": "Nvme0", 00:31:00.170 "trtype": "tcp", 00:31:00.170 "traddr": "10.0.0.2", 00:31:00.170 "adrfam": "ipv4", 00:31:00.170 "trsvcid": "4420", 00:31:00.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.170 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.170 "hdgst": false, 00:31:00.170 "ddgst": false 00:31:00.170 }, 00:31:00.170 "method": "bdev_nvme_attach_controller" 00:31:00.170 },{ 00:31:00.170 "params": { 00:31:00.170 "name": "Nvme1", 00:31:00.170 "trtype": "tcp", 00:31:00.170 "traddr": "10.0.0.2", 00:31:00.170 "adrfam": "ipv4", 00:31:00.170 "trsvcid": "4420", 00:31:00.170 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:00.170 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:00.170 "hdgst": false, 00:31:00.170 "ddgst": false 00:31:00.170 }, 00:31:00.170 "method": "bdev_nvme_attach_controller" 00:31:00.170 }' 00:31:00.170 13:41:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:00.170 13:41:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:00.170 13:41:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.170 13:41:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:00.170 13:41:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:00.170 13:41:16 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:00.170 13:41:16 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:00.170 13:41:16 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:00.170 13:41:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:00.170 13:41:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.170 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:00.170 ... 00:31:00.170 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:00.170 ... 
00:31:00.170 fio-3.35 00:31:00.170 Starting 4 threads 00:31:05.457 00:31:05.457 filename0: (groupid=0, jobs=1): err= 0: pid=91092: Fri Apr 26 13:41:22 2024 00:31:05.457 read: IOPS=1918, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5001msec) 00:31:05.457 slat (nsec): min=4914, max=55872, avg=14086.75, stdev=4140.17 00:31:05.457 clat (usec): min=1112, max=7712, avg=4100.50, stdev=193.86 00:31:05.457 lat (usec): min=1120, max=7726, avg=4114.59, stdev=193.85 00:31:05.457 clat percentiles (usec): 00:31:05.457 | 1.00th=[ 3949], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4047], 00:31:05.457 | 30.00th=[ 4080], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:31:05.457 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4178], 00:31:05.457 | 99.00th=[ 4293], 99.50th=[ 5080], 99.90th=[ 6259], 99.95th=[ 6325], 00:31:05.457 | 99.99th=[ 7701] 00:31:05.457 bw ( KiB/s): min=15104, max=15360, per=24.97%, avg=15317.33, stdev=90.51, samples=9 00:31:05.457 iops : min= 1888, max= 1920, avg=1914.67, stdev=11.31, samples=9 00:31:05.457 lat (msec) : 2=0.09%, 4=2.30%, 10=97.60% 00:31:05.457 cpu : usr=93.38%, sys=5.40%, ctx=59, majf=0, minf=9 00:31:05.457 IO depths : 1=11.6%, 2=25.0%, 4=50.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 issued rwts: total=9592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:05.457 filename0: (groupid=0, jobs=1): err= 0: pid=91093: Fri Apr 26 13:41:22 2024 00:31:05.457 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5001msec) 00:31:05.457 slat (nsec): min=4840, max=37894, avg=12507.79, stdev=4612.48 00:31:05.457 clat (usec): min=3065, max=6113, avg=4117.19, stdev=123.05 00:31:05.457 lat (usec): min=3077, max=6144, avg=4129.70, stdev=122.48 00:31:05.457 clat percentiles (usec): 00:31:05.457 | 1.00th=[ 3982], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4080], 00:31:05.457 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4113], 00:31:05.457 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:31:05.457 | 99.00th=[ 4293], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 6063], 00:31:05.457 | 99.99th=[ 6128] 00:31:05.457 bw ( KiB/s): min=15104, max=15360, per=24.97%, avg=15317.33, stdev=90.51, samples=9 00:31:05.457 iops : min= 1888, max= 1920, avg=1914.67, stdev=11.31, samples=9 00:31:05.457 lat (msec) : 4=1.57%, 10=98.43% 00:31:05.457 cpu : usr=93.68%, sys=5.06%, ctx=8, majf=0, minf=0 00:31:05.457 IO depths : 1=12.0%, 2=25.0%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 issued rwts: total=9584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:05.457 filename1: (groupid=0, jobs=1): err= 0: pid=91094: Fri Apr 26 13:41:22 2024 00:31:05.457 read: IOPS=1919, BW=15.0MiB/s (15.7MB/s)(75.0MiB/5004msec) 00:31:05.457 slat (nsec): min=4709, max=37650, avg=8633.98, stdev=2155.04 00:31:05.457 clat (usec): min=1254, max=4769, avg=4122.01, stdev=135.62 00:31:05.457 lat (usec): min=1267, max=4776, avg=4130.64, stdev=135.67 00:31:05.457 clat percentiles (usec): 00:31:05.457 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:31:05.457 | 30.00th=[ 4113], 40.00th=[ 
4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:05.457 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:31:05.457 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4621], 99.95th=[ 4621], 00:31:05.457 | 99.99th=[ 4752] 00:31:05.457 bw ( KiB/s): min=15232, max=15488, per=25.03%, avg=15360.00, stdev=80.18, samples=10 00:31:05.457 iops : min= 1904, max= 1936, avg=1920.00, stdev=10.02, samples=10 00:31:05.457 lat (msec) : 2=0.17%, 4=1.45%, 10=98.39% 00:31:05.457 cpu : usr=93.84%, sys=4.98%, ctx=11, majf=0, minf=0 00:31:05.457 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.457 issued rwts: total=9605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:05.457 filename1: (groupid=0, jobs=1): err= 0: pid=91095: Fri Apr 26 13:41:22 2024 00:31:05.457 read: IOPS=1917, BW=15.0MiB/s (15.7MB/s)(75.0MiB/5003msec) 00:31:05.457 slat (nsec): min=3830, max=60886, avg=14727.17, stdev=4080.18 00:31:05.457 clat (usec): min=2189, max=5265, avg=4103.41, stdev=127.28 00:31:05.457 lat (usec): min=2196, max=5279, avg=4118.14, stdev=127.15 00:31:05.457 clat percentiles (usec): 00:31:05.458 | 1.00th=[ 3621], 5.00th=[ 4015], 10.00th=[ 4015], 20.00th=[ 4047], 00:31:05.458 | 30.00th=[ 4080], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4113], 00:31:05.458 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4178], 95.00th=[ 4228], 00:31:05.458 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5014], 99.95th=[ 5014], 00:31:05.458 | 99.99th=[ 5276] 00:31:05.458 bw ( KiB/s): min=15232, max=15488, per=25.00%, avg=15339.20, stdev=96.61, samples=10 00:31:05.458 iops : min= 1904, max= 1936, avg=1917.40, stdev=12.08, samples=10 00:31:05.458 lat (msec) : 4=3.41%, 10=96.59% 00:31:05.458 cpu : usr=94.22%, sys=4.56%, ctx=10, majf=0, minf=0 00:31:05.458 IO depths : 1=9.2%, 2=19.2%, 4=55.8%, 8=15.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.458 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.458 issued rwts: total=9595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.458 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:05.458 00:31:05.458 Run status group 0 (all jobs): 00:31:05.458 READ: bw=59.9MiB/s (62.8MB/s), 15.0MiB/s-15.0MiB/s (15.7MB/s-15.7MB/s), io=300MiB (314MB), run=5001-5004msec 00:31:05.458 13:41:22 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:05.458 13:41:22 -- target/dif.sh@43 -- # local sub 00:31:05.458 13:41:22 -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.458 13:41:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.458 13:41:22 -- target/dif.sh@36 -- # local sub_id=0 00:31:05.458 13:41:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- 
target/dif.sh@45 -- # for sub in "$@" 00:31:05.458 13:41:22 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:05.458 13:41:22 -- target/dif.sh@36 -- # local sub_id=1 00:31:05.458 13:41:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 ************************************ 00:31:05.458 END TEST fio_dif_rand_params 00:31:05.458 ************************************ 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 00:31:05.458 real 0m23.847s 00:31:05.458 user 2m6.224s 00:31:05.458 sys 0m5.448s 00:31:05.458 13:41:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:05.458 13:41:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:05.458 13:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 ************************************ 00:31:05.458 START TEST fio_dif_digest 00:31:05.458 ************************************ 00:31:05.458 13:41:22 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:31:05.458 13:41:22 -- target/dif.sh@123 -- # local NULL_DIF 00:31:05.458 13:41:22 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:05.458 13:41:22 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:05.458 13:41:22 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:05.458 13:41:22 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:05.458 13:41:22 -- target/dif.sh@127 -- # numjobs=3 00:31:05.458 13:41:22 -- target/dif.sh@127 -- # iodepth=3 00:31:05.458 13:41:22 -- target/dif.sh@127 -- # runtime=10 00:31:05.458 13:41:22 -- target/dif.sh@128 -- # hdgst=true 00:31:05.458 13:41:22 -- target/dif.sh@128 -- # ddgst=true 00:31:05.458 13:41:22 -- target/dif.sh@130 -- # create_subsystems 0 00:31:05.458 13:41:22 -- target/dif.sh@28 -- # local sub 00:31:05.458 13:41:22 -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.458 13:41:22 -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.458 13:41:22 -- target/dif.sh@18 -- # local sub_id=0 00:31:05.458 13:41:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 bdev_null0 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 
13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.458 13:41:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:05.458 13:41:22 -- common/autotest_common.sh@10 -- # set +x 00:31:05.458 [2024-04-26 13:41:22.467887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.458 13:41:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:05.458 13:41:22 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:05.458 13:41:22 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:05.458 13:41:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:05.458 13:41:22 -- nvmf/common.sh@521 -- # config=() 00:31:05.458 13:41:22 -- nvmf/common.sh@521 -- # local subsystem config 00:31:05.458 13:41:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:05.458 13:41:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:05.458 { 00:31:05.458 "params": { 00:31:05.458 "name": "Nvme$subsystem", 00:31:05.458 "trtype": "$TEST_TRANSPORT", 00:31:05.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.458 "adrfam": "ipv4", 00:31:05.458 "trsvcid": "$NVMF_PORT", 00:31:05.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.458 "hdgst": ${hdgst:-false}, 00:31:05.458 "ddgst": ${ddgst:-false} 00:31:05.458 }, 00:31:05.458 "method": "bdev_nvme_attach_controller" 00:31:05.458 } 00:31:05.458 EOF 00:31:05.458 )") 00:31:05.458 13:41:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.458 13:41:22 -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.458 13:41:22 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.458 13:41:22 -- target/dif.sh@54 -- # local file 00:31:05.458 13:41:22 -- target/dif.sh@56 -- # cat 00:31:05.458 13:41:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:05.458 13:41:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.458 13:41:22 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:05.459 13:41:22 -- nvmf/common.sh@543 -- # cat 00:31:05.459 13:41:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.459 13:41:22 -- common/autotest_common.sh@1327 -- # shift 00:31:05.459 13:41:22 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:05.459 13:41:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.459 13:41:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.459 13:41:22 -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:05.459 13:41:22 -- nvmf/common.sh@545 -- # jq . 
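Editor's note: rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py, so outside the harness the same DIF-type-3 target can be assembled directly; a minimal sketch, assuming the default RPC socket and the 10.0.0.2/4420 listener used throughout this run:

    # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The JSON printed next configures the initiator side of the same connection, this time with TCP header and data digests ("hdgst"/"ddgst") enabled, which is what fio_dif_digest exercises.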
00:31:05.459 13:41:22 -- nvmf/common.sh@546 -- # IFS=, 00:31:05.459 13:41:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:05.459 "params": { 00:31:05.459 "name": "Nvme0", 00:31:05.459 "trtype": "tcp", 00:31:05.459 "traddr": "10.0.0.2", 00:31:05.459 "adrfam": "ipv4", 00:31:05.459 "trsvcid": "4420", 00:31:05.459 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.459 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.459 "hdgst": true, 00:31:05.459 "ddgst": true 00:31:05.459 }, 00:31:05.459 "method": "bdev_nvme_attach_controller" 00:31:05.459 }' 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:05.459 13:41:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:05.459 13:41:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:05.459 13:41:22 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:05.459 13:41:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:05.459 13:41:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:05.459 13:41:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.459 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:05.459 ... 00:31:05.459 fio-3.35 00:31:05.459 Starting 3 threads 00:31:17.666 00:31:17.666 filename0: (groupid=0, jobs=1): err= 0: pid=91205: Fri Apr 26 13:41:33 2024 00:31:17.666 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(303MiB/10006msec) 00:31:17.666 slat (nsec): min=7350, max=35232, avg=12613.63, stdev=2533.72 00:31:17.666 clat (usec): min=8772, max=53152, avg=12367.22, stdev=2122.23 00:31:17.666 lat (usec): min=8784, max=53164, avg=12379.84, stdev=2122.19 00:31:17.666 clat percentiles (usec): 00:31:17.666 | 1.00th=[10552], 5.00th=[11076], 10.00th=[11338], 20.00th=[11731], 00:31:17.666 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:31:17.666 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13435], 00:31:17.666 | 99.00th=[13960], 99.50th=[14353], 99.90th=[52691], 99.95th=[53216], 00:31:17.666 | 99.99th=[53216] 00:31:17.666 bw ( KiB/s): min=27904, max=32000, per=38.71%, avg=31016.42, stdev=924.06, samples=19 00:31:17.666 iops : min= 218, max= 250, avg=242.32, stdev= 7.22, samples=19 00:31:17.666 lat (msec) : 10=0.21%, 20=99.55%, 100=0.25% 00:31:17.666 cpu : usr=91.77%, sys=6.72%, ctx=14, majf=0, minf=9 00:31:17.666 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 issued rwts: total=2424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.666 filename0: (groupid=0, jobs=1): err= 0: pid=91206: Fri Apr 26 13:41:33 2024 00:31:17.666 read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(215MiB/10005msec) 00:31:17.666 slat (nsec): min=7828, max=37148, avg=13036.29, stdev=2835.44 00:31:17.666 clat (usec): min=9910, max=20879, avg=17451.85, stdev=1054.80 00:31:17.666 lat (usec): min=9920, max=20893, avg=17464.88, 
stdev=1054.95 00:31:17.666 clat percentiles (usec): 00:31:17.666 | 1.00th=[12125], 5.00th=[16057], 10.00th=[16450], 20.00th=[16909], 00:31:17.666 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:31:17.666 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18482], 95.00th=[18744], 00:31:17.666 | 99.00th=[19268], 99.50th=[19530], 99.90th=[20317], 99.95th=[20841], 00:31:17.666 | 99.99th=[20841] 00:31:17.666 bw ( KiB/s): min=21248, max=23296, per=27.45%, avg=21991.37, stdev=460.32, samples=19 00:31:17.666 iops : min= 166, max= 182, avg=171.79, stdev= 3.58, samples=19 00:31:17.666 lat (msec) : 10=0.06%, 20=99.83%, 50=0.12% 00:31:17.666 cpu : usr=93.36%, sys=5.36%, ctx=12, majf=0, minf=9 00:31:17.666 IO depths : 1=2.7%, 2=97.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 issued rwts: total=1718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.666 filename0: (groupid=0, jobs=1): err= 0: pid=91207: Fri Apr 26 13:41:33 2024 00:31:17.666 read: IOPS=211, BW=26.5MiB/s (27.8MB/s)(265MiB/10005msec) 00:31:17.666 slat (nsec): min=7280, max=49075, avg=13056.38, stdev=3850.75 00:31:17.666 clat (usec): min=7156, max=17967, avg=14132.52, stdev=1196.97 00:31:17.666 lat (usec): min=7167, max=17976, avg=14145.57, stdev=1196.81 00:31:17.666 clat percentiles (usec): 00:31:17.666 | 1.00th=[ 8848], 5.00th=[12387], 10.00th=[12911], 20.00th=[13304], 00:31:17.666 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:31:17.666 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:31:17.666 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:31:17.666 | 99.99th=[17957] 00:31:17.666 bw ( KiB/s): min=26368, max=29184, per=33.89%, avg=27152.21, stdev=676.63, samples=19 00:31:17.666 iops : min= 206, max= 228, avg=212.11, stdev= 5.31, samples=19 00:31:17.666 lat (msec) : 10=1.27%, 20=98.73% 00:31:17.666 cpu : usr=92.16%, sys=6.26%, ctx=10, majf=0, minf=0 00:31:17.666 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:17.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:17.666 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:17.666 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:17.666 00:31:17.666 Run status group 0 (all jobs): 00:31:17.666 READ: bw=78.2MiB/s (82.0MB/s), 21.5MiB/s-30.3MiB/s (22.5MB/s-31.8MB/s), io=783MiB (821MB), run=10005-10006msec 00:31:17.666 13:41:33 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:17.666 13:41:33 -- target/dif.sh@43 -- # local sub 00:31:17.666 13:41:33 -- target/dif.sh@45 -- # for sub in "$@" 00:31:17.666 13:41:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:17.666 13:41:33 -- target/dif.sh@36 -- # local sub_id=0 00:31:17.666 13:41:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.666 13:41:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:17.666 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:31:17.666 13:41:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.666 13:41:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:17.666 13:41:33 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:31:17.666 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:31:17.666 ************************************ 00:31:17.666 END TEST fio_dif_digest 00:31:17.666 ************************************ 00:31:17.666 13:41:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:17.666 00:31:17.666 real 0m11.050s 00:31:17.666 user 0m28.438s 00:31:17.666 sys 0m2.111s 00:31:17.666 13:41:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:17.666 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:31:17.666 13:41:33 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:17.666 13:41:33 -- target/dif.sh@147 -- # nvmftestfini 00:31:17.666 13:41:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:17.666 13:41:33 -- nvmf/common.sh@117 -- # sync 00:31:17.666 13:41:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:17.666 13:41:33 -- nvmf/common.sh@120 -- # set +e 00:31:17.666 13:41:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:17.666 13:41:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:17.666 rmmod nvme_tcp 00:31:17.666 rmmod nvme_fabrics 00:31:17.666 rmmod nvme_keyring 00:31:17.666 13:41:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:17.666 13:41:33 -- nvmf/common.sh@124 -- # set -e 00:31:17.666 13:41:33 -- nvmf/common.sh@125 -- # return 0 00:31:17.666 13:41:33 -- nvmf/common.sh@478 -- # '[' -n 90407 ']' 00:31:17.666 13:41:33 -- nvmf/common.sh@479 -- # killprocess 90407 00:31:17.666 13:41:33 -- common/autotest_common.sh@936 -- # '[' -z 90407 ']' 00:31:17.666 13:41:33 -- common/autotest_common.sh@940 -- # kill -0 90407 00:31:17.666 13:41:33 -- common/autotest_common.sh@941 -- # uname 00:31:17.666 13:41:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:17.666 13:41:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90407 00:31:17.666 killing process with pid 90407 00:31:17.666 13:41:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:17.666 13:41:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:17.666 13:41:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90407' 00:31:17.666 13:41:33 -- common/autotest_common.sh@955 -- # kill 90407 00:31:17.666 13:41:33 -- common/autotest_common.sh@960 -- # wait 90407 00:31:17.666 13:41:33 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:17.666 13:41:33 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:17.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:17.666 Waiting for block devices as requested 00:31:17.666 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:17.666 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:17.666 13:41:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:17.666 13:41:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:17.666 13:41:34 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.666 13:41:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.666 13:41:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.666 13:41:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.666 13:41:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.666 13:41:34 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:17.666 00:31:17.666 real 1m0.884s 00:31:17.666 user 3m52.496s 00:31:17.666 sys 0m15.791s 00:31:17.666 13:41:34 -- common/autotest_common.sh@1112 
-- # xtrace_disable 00:31:17.666 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:31:17.666 ************************************ 00:31:17.666 END TEST nvmf_dif 00:31:17.666 ************************************ 00:31:17.666 13:41:34 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:17.666 13:41:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:17.666 13:41:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:17.666 13:41:34 -- common/autotest_common.sh@10 -- # set +x 00:31:17.667 ************************************ 00:31:17.667 START TEST nvmf_abort_qd_sizes 00:31:17.667 ************************************ 00:31:17.667 13:41:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:17.667 * Looking for test storage... 00:31:17.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:17.667 13:41:34 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:17.667 13:41:34 -- nvmf/common.sh@7 -- # uname -s 00:31:17.667 13:41:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:17.667 13:41:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:17.667 13:41:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:17.667 13:41:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:17.667 13:41:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:17.667 13:41:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:17.667 13:41:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:17.667 13:41:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:17.667 13:41:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:17.667 13:41:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:31:17.667 13:41:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:31:17.667 13:41:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:17.667 13:41:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:17.667 13:41:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:17.667 13:41:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:17.667 13:41:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:17.667 13:41:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:17.667 13:41:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:17.667 13:41:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:17.667 13:41:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.667 13:41:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.667 13:41:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.667 13:41:34 -- paths/export.sh@5 -- # export PATH 00:31:17.667 13:41:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:17.667 13:41:34 -- nvmf/common.sh@47 -- # : 0 00:31:17.667 13:41:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:17.667 13:41:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:17.667 13:41:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:17.667 13:41:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:17.667 13:41:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:17.667 13:41:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:17.667 13:41:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:17.667 13:41:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:17.667 13:41:34 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:17.667 13:41:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:17.667 13:41:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:17.667 13:41:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:17.667 13:41:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:17.667 13:41:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:17.667 13:41:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.667 13:41:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.667 13:41:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:17.667 13:41:34 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:17.667 13:41:34 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:17.667 13:41:34 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:17.667 13:41:34 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:17.667 13:41:34 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:17.667 13:41:34 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:17.667 13:41:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:17.667 13:41:34 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
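Editor's note: condensed, the nvmf_veth_init steps traced below build a small bridged topology: the target-side veth ends live in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3, the initiator-side end stays in the root namespace with 10.0.0.1, and the peer ends are enslaved to the nvmf_br bridge. A stripped-down sketch of the same setup (the second target interface, 10.0.0.3, is configured identically and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up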
00:31:17.667 13:41:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:17.667 13:41:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:17.667 13:41:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:17.667 13:41:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:17.667 13:41:34 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:17.667 13:41:34 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:17.667 13:41:34 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:17.667 13:41:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:17.667 Cannot find device "nvmf_tgt_br" 00:31:17.667 13:41:34 -- nvmf/common.sh@155 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:17.667 Cannot find device "nvmf_tgt_br2" 00:31:17.667 13:41:34 -- nvmf/common.sh@156 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:17.667 13:41:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:17.667 Cannot find device "nvmf_tgt_br" 00:31:17.667 13:41:34 -- nvmf/common.sh@158 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:17.667 Cannot find device "nvmf_tgt_br2" 00:31:17.667 13:41:34 -- nvmf/common.sh@159 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:17.667 13:41:34 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:17.667 13:41:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:17.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:17.667 13:41:34 -- nvmf/common.sh@162 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:17.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:17.667 13:41:34 -- nvmf/common.sh@163 -- # true 00:31:17.667 13:41:34 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:17.667 13:41:34 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:17.667 13:41:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:17.667 13:41:34 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:17.667 13:41:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:17.667 13:41:34 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:17.667 13:41:34 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:17.667 13:41:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:17.667 13:41:34 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:17.667 13:41:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:17.667 13:41:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:17.667 13:41:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:17.667 13:41:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:17.667 13:41:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:17.667 13:41:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:17.667 13:41:35 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:17.667 13:41:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:17.667 13:41:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:17.667 13:41:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:17.667 13:41:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:17.667 13:41:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:17.667 13:41:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:17.667 13:41:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:17.667 13:41:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:17.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:17.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:31:17.667 00:31:17.667 --- 10.0.0.2 ping statistics --- 00:31:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.667 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:31:17.667 13:41:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:17.667 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:17.667 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:31:17.667 00:31:17.667 --- 10.0.0.3 ping statistics --- 00:31:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.667 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:31:17.667 13:41:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:17.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:17.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:31:17.667 00:31:17.667 --- 10.0.0.1 ping statistics --- 00:31:17.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:17.667 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:31:17.667 13:41:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:17.667 13:41:35 -- nvmf/common.sh@422 -- # return 0 00:31:17.667 13:41:35 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:17.667 13:41:35 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:18.602 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:18.602 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:18.602 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:18.602 13:41:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:18.602 13:41:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:18.602 13:41:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:18.602 13:41:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:18.602 13:41:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:18.602 13:41:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:18.602 13:41:35 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:18.602 13:41:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:18.602 13:41:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:18.602 13:41:35 -- common/autotest_common.sh@10 -- # set +x 00:31:18.602 13:41:36 -- nvmf/common.sh@470 -- # nvmfpid=91805 00:31:18.602 13:41:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:18.602 13:41:36 -- nvmf/common.sh@471 -- # waitforlisten 91805 00:31:18.602 13:41:36 -- 
common/autotest_common.sh@817 -- # '[' -z 91805 ']' 00:31:18.602 13:41:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.602 13:41:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:18.602 13:41:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.602 13:41:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:18.602 13:41:36 -- common/autotest_common.sh@10 -- # set +x 00:31:18.888 [2024-04-26 13:41:36.066540] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:31:18.888 [2024-04-26 13:41:36.066659] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.889 [2024-04-26 13:41:36.208990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:19.194 [2024-04-26 13:41:36.338013] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:19.194 [2024-04-26 13:41:36.338082] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:19.194 [2024-04-26 13:41:36.338098] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:19.194 [2024-04-26 13:41:36.338109] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:19.194 [2024-04-26 13:41:36.338118] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
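Editor's note: at this point the target is running on four reactors (core mask 0xf) inside the namespace, and waitforlisten simply blocks until its RPC socket answers. A manual equivalent, sketched under the assumption that the default /var/tmp/spdk.sock socket is in use:

    # poll until the RPC server responds, then confirm the reactor layout
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors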
00:31:19.194 [2024-04-26 13:41:36.338303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.194 [2024-04-26 13:41:36.339022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:19.195 [2024-04-26 13:41:36.339111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:19.195 [2024-04-26 13:41:36.339117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.840 13:41:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:19.840 13:41:37 -- common/autotest_common.sh@850 -- # return 0 00:31:19.840 13:41:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:19.840 13:41:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:19.840 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:19.840 13:41:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:19.840 13:41:37 -- scripts/common.sh@309 -- # local bdf bdfs 00:31:19.840 13:41:37 -- scripts/common.sh@310 -- # local nvmes 00:31:19.840 13:41:37 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:31:19.840 13:41:37 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:31:19.840 13:41:37 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:31:19.840 13:41:37 -- scripts/common.sh@295 -- # local bdf= 00:31:19.840 13:41:37 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:31:19.840 13:41:37 -- scripts/common.sh@230 -- # local class 00:31:19.840 13:41:37 -- scripts/common.sh@231 -- # local subclass 00:31:19.840 13:41:37 -- scripts/common.sh@232 -- # local progif 00:31:19.840 13:41:37 -- scripts/common.sh@233 -- # printf %02x 1 00:31:19.840 13:41:37 -- scripts/common.sh@233 -- # class=01 00:31:19.840 13:41:37 -- scripts/common.sh@234 -- # printf %02x 8 00:31:19.840 13:41:37 -- scripts/common.sh@234 -- # subclass=08 00:31:19.840 13:41:37 -- scripts/common.sh@235 -- # printf %02x 2 00:31:19.840 13:41:37 -- scripts/common.sh@235 -- # progif=02 00:31:19.840 13:41:37 -- scripts/common.sh@237 -- # hash lspci 00:31:19.840 13:41:37 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:31:19.840 13:41:37 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:31:19.840 13:41:37 -- scripts/common.sh@240 -- # grep -i -- -p02 00:31:19.840 13:41:37 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:31:19.840 13:41:37 -- scripts/common.sh@242 -- # tr -d '"' 00:31:19.840 13:41:37 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:19.840 13:41:37 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:31:19.840 13:41:37 -- scripts/common.sh@15 -- # local i 00:31:19.840 13:41:37 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:31:19.840 13:41:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:19.840 13:41:37 -- scripts/common.sh@24 -- # return 0 00:31:19.840 13:41:37 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:31:19.840 13:41:37 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:31:19.840 13:41:37 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:31:19.840 13:41:37 -- scripts/common.sh@15 -- # local i 00:31:19.840 13:41:37 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:31:19.840 13:41:37 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:19.840 13:41:37 -- scripts/common.sh@24 -- # return 0 00:31:19.840 13:41:37 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:31:19.840 13:41:37 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:19.840 13:41:37 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:31:19.840 13:41:37 -- scripts/common.sh@320 -- # uname -s 00:31:19.840 13:41:37 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:19.840 13:41:37 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:19.840 13:41:37 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:19.840 13:41:37 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:31:19.840 13:41:37 -- scripts/common.sh@320 -- # uname -s 00:31:19.840 13:41:37 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:19.840 13:41:37 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:19.840 13:41:37 -- scripts/common.sh@325 -- # (( 2 )) 00:31:19.840 13:41:37 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:19.840 13:41:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:19.840 13:41:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:19.840 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:19.840 ************************************ 00:31:19.840 START TEST spdk_target_abort 00:31:19.840 ************************************ 00:31:19.840 13:41:37 -- common/autotest_common.sh@1111 -- # spdk_target 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:19.840 13:41:37 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:31:19.840 13:41:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:19.840 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:20.098 spdk_targetn1 00:31:20.098 13:41:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.098 13:41:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.098 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:20.098 [2024-04-26 13:41:37.327101] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.098 13:41:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:20.098 13:41:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.098 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:20.098 13:41:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:20.098 13:41:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.098 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:20.098 13:41:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:20.098 13:41:37 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:31:20.098 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:31:20.098 [2024-04-26 13:41:37.355320] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.098 13:41:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:20.098 13:41:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.099 13:41:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.099 13:41:37 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:20.099 13:41:37 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.099 13:41:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.099 13:41:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:23.386 Initializing NVMe Controllers 00:31:23.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:23.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:23.386 Initialization complete. Launching workers. 
00:31:23.386 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11503, failed: 0 00:31:23.386 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1046, failed to submit 10457 00:31:23.386 success 733, unsuccess 313, failed 0 00:31:23.386 13:41:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:23.386 13:41:40 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:26.668 [2024-04-26 13:41:43.825818] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec500 is same with the state(5) to be set 00:31:26.668 [2024-04-26 13:41:43.825873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec500 is same with the state(5) to be set 00:31:26.668 [2024-04-26 13:41:43.825886] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec500 is same with the state(5) to be set 00:31:26.668 [2024-04-26 13:41:43.825895] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfec500 is same with the state(5) to be set 00:31:26.668 Initializing NVMe Controllers 00:31:26.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:26.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:26.668 Initialization complete. Launching workers. 00:31:26.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5997, failed: 0 00:31:26.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 4734 00:31:26.668 success 278, unsuccess 985, failed 0 00:31:26.668 13:41:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:26.669 13:41:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:29.954 Initializing NVMe Controllers 00:31:29.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:29.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:29.955 Initialization complete. Launching workers. 
00:31:29.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29777, failed: 0 00:31:29.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2559, failed to submit 27218 00:31:29.955 success 445, unsuccess 2114, failed 0 00:31:29.955 13:41:47 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:29.955 13:41:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.955 13:41:47 -- common/autotest_common.sh@10 -- # set +x 00:31:29.955 13:41:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:29.955 13:41:47 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:29.955 13:41:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:29.955 13:41:47 -- common/autotest_common.sh@10 -- # set +x 00:31:30.907 13:41:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:30.907 13:41:48 -- target/abort_qd_sizes.sh@61 -- # killprocess 91805 00:31:30.907 13:41:48 -- common/autotest_common.sh@936 -- # '[' -z 91805 ']' 00:31:30.907 13:41:48 -- common/autotest_common.sh@940 -- # kill -0 91805 00:31:30.907 13:41:48 -- common/autotest_common.sh@941 -- # uname 00:31:30.907 13:41:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:30.907 13:41:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91805 00:31:30.907 killing process with pid 91805 00:31:30.907 13:41:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:30.907 13:41:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:30.907 13:41:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91805' 00:31:30.907 13:41:48 -- common/autotest_common.sh@955 -- # kill 91805 00:31:30.907 13:41:48 -- common/autotest_common.sh@960 -- # wait 91805 00:31:31.166 ************************************ 00:31:31.166 END TEST spdk_target_abort 00:31:31.166 ************************************ 00:31:31.166 00:31:31.166 real 0m11.196s 00:31:31.166 user 0m45.775s 00:31:31.166 sys 0m1.744s 00:31:31.166 13:41:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:31.166 13:41:48 -- common/autotest_common.sh@10 -- # set +x 00:31:31.166 13:41:48 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:31.166 13:41:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:31.166 13:41:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:31.166 13:41:48 -- common/autotest_common.sh@10 -- # set +x 00:31:31.166 ************************************ 00:31:31.166 START TEST kernel_target_abort 00:31:31.166 ************************************ 00:31:31.166 13:41:48 -- common/autotest_common.sh@1111 -- # kernel_target 00:31:31.166 13:41:48 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:31.166 13:41:48 -- nvmf/common.sh@717 -- # local ip 00:31:31.166 13:41:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:31:31.166 13:41:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:31:31.166 13:41:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:31.166 13:41:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:31.166 13:41:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:31:31.166 13:41:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:31.166 13:41:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:31:31.166 13:41:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:31:31.166 13:41:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
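The configure_kernel_target call traced next builds the same abort-test target with the in-kernel nvmet stack instead of an SPDK app, entirely through configfs. A minimal standalone sketch of that sequence is below; the paths and values are taken from the trace that follows, but xtrace does not show redirect targets, so the attribute file names are the usual nvmet ones and should be read as assumptions rather than as copied from this log.

  modprobe nvmet   # the tcp transport module is normally pulled in when the port is configured; load nvmet-tcp explicitly if that write fails
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"                 # assumed target file
  echo 1                                > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"   # the unused kernel namespace picked by the loop above
  echo 1                                > "$subsys/namespaces/1/enable"
  echo 10.0.0.1                         > "$port/addr_traddr"
  echo tcp                              > "$port/addr_trtype"
  echo 4420                             > "$port/addr_trsvcid"
  echo ipv4                             > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"   # linking the subsystem into the port starts the listener
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # sanity check; the trace also passes --hostnqn/--hostid

Teardown (clean_kernel_target, traced further down) removes the link and rmdir's the namespace, port and subsystem in reverse order before modprobe -r nvmet_tcp nvmet.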
00:31:31.166 13:41:48 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:31.166 13:41:48 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:31.166 13:41:48 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:31:31.166 13:41:48 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:31.166 13:41:48 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:31.166 13:41:48 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:31.166 13:41:48 -- nvmf/common.sh@628 -- # local block nvme 00:31:31.166 13:41:48 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:31:31.166 13:41:48 -- nvmf/common.sh@631 -- # modprobe nvmet 00:31:31.166 13:41:48 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:31.166 13:41:48 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:31.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:31.733 Waiting for block devices as requested 00:31:31.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:31.733 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:31.991 13:41:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.991 13:41:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:31:31.991 13:41:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:31:31.991 13:41:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:31.991 13:41:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:31:31.991 13:41:49 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:31.991 13:41:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:31:31.991 No valid GPT data, bailing 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # pt= 00:31:31.991 13:41:49 -- scripts/common.sh@392 -- # return 1 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:31:31.991 13:41:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.991 13:41:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:31:31.991 13:41:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:31:31.991 13:41:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:31:31.991 13:41:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:31:31.991 13:41:49 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:31:31.991 13:41:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:31:31.991 No valid GPT data, bailing 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # pt= 00:31:31.991 13:41:49 -- scripts/common.sh@392 -- # return 1 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:31:31.991 13:41:49 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:31:31.991 13:41:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:31:31.991 13:41:49 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:31:31.991 13:41:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:31:31.991 13:41:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:31:31.991 13:41:49 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:31:31.991 13:41:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:31:31.991 No valid GPT data, bailing 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:31:31.991 13:41:49 -- scripts/common.sh@391 -- # pt= 00:31:31.991 13:41:49 -- scripts/common.sh@392 -- # return 1 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:31:31.991 13:41:49 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:31:31.991 13:41:49 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:31:31.991 13:41:49 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:31:31.991 13:41:49 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:31:31.991 13:41:49 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:31:31.991 13:41:49 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:31:31.991 13:41:49 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:31:31.991 13:41:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:31:32.299 No valid GPT data, bailing 00:31:32.299 13:41:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:31:32.299 13:41:49 -- scripts/common.sh@391 -- # pt= 00:31:32.299 13:41:49 -- scripts/common.sh@392 -- # return 1 00:31:32.299 13:41:49 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:31:32.299 13:41:49 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:31:32.299 13:41:49 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:32.299 13:41:49 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:32.299 13:41:49 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:32.299 13:41:49 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:32.299 13:41:49 -- nvmf/common.sh@656 -- # echo 1 00:31:32.299 13:41:49 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:31:32.299 13:41:49 -- nvmf/common.sh@658 -- # echo 1 00:31:32.299 13:41:49 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:31:32.299 13:41:49 -- nvmf/common.sh@661 -- # echo tcp 00:31:32.299 13:41:49 -- nvmf/common.sh@662 -- # echo 4420 00:31:32.299 13:41:49 -- nvmf/common.sh@663 -- # echo ipv4 00:31:32.299 13:41:49 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:32.299 13:41:49 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 --hostid=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 -a 10.0.0.1 -t tcp -s 4420 00:31:32.299 00:31:32.299 Discovery Log Number of Records 2, Generation counter 2 00:31:32.299 =====Discovery Log Entry 0====== 00:31:32.299 trtype: tcp 00:31:32.299 adrfam: ipv4 00:31:32.299 
subtype: current discovery subsystem 00:31:32.299 treq: not specified, sq flow control disable supported 00:31:32.299 portid: 1 00:31:32.299 trsvcid: 4420 00:31:32.299 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:32.299 traddr: 10.0.0.1 00:31:32.299 eflags: none 00:31:32.299 sectype: none 00:31:32.299 =====Discovery Log Entry 1====== 00:31:32.299 trtype: tcp 00:31:32.299 adrfam: ipv4 00:31:32.299 subtype: nvme subsystem 00:31:32.299 treq: not specified, sq flow control disable supported 00:31:32.299 portid: 1 00:31:32.299 trsvcid: 4420 00:31:32.299 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:32.299 traddr: 10.0.0.1 00:31:32.299 eflags: none 00:31:32.299 sectype: none 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:32.299 13:41:49 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:35.594 Initializing NVMe Controllers 00:31:35.594 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:35.594 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:35.594 Initialization complete. Launching workers. 
00:31:35.594 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35265, failed: 0 00:31:35.594 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35265, failed to submit 0 00:31:35.594 success 0, unsuccess 35265, failed 0 00:31:35.594 13:41:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:35.594 13:41:52 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:38.879 Initializing NVMe Controllers 00:31:38.879 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:38.879 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:38.879 Initialization complete. Launching workers. 00:31:38.879 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69824, failed: 0 00:31:38.879 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29920, failed to submit 39904 00:31:38.879 success 0, unsuccess 29920, failed 0 00:31:38.879 13:41:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:38.879 13:41:55 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:42.164 Initializing NVMe Controllers 00:31:42.164 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:42.164 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:42.164 Initialization complete. Launching workers. 00:31:42.164 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78328, failed: 0 00:31:42.164 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19538, failed to submit 58790 00:31:42.164 success 0, unsuccess 19538, failed 0 00:31:42.164 13:41:59 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:42.164 13:41:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:42.164 13:41:59 -- nvmf/common.sh@675 -- # echo 0 00:31:42.164 13:41:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:42.164 13:41:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:42.164 13:41:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:42.164 13:41:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:42.164 13:41:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:31:42.164 13:41:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:31:42.164 13:41:59 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:42.422 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:44.323 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:44.323 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:44.615 ************************************ 00:31:44.615 END TEST kernel_target_abort 00:31:44.615 ************************************ 00:31:44.615 00:31:44.615 real 0m13.260s 00:31:44.615 user 0m6.319s 00:31:44.615 sys 0m4.205s 00:31:44.615 13:42:01 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:31:44.615 13:42:01 -- common/autotest_common.sh@10 -- # set +x 00:31:44.615 13:42:01 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:44.615 13:42:01 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:44.615 13:42:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:44.615 13:42:01 -- nvmf/common.sh@117 -- # sync 00:31:44.615 13:42:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.615 13:42:01 -- nvmf/common.sh@120 -- # set +e 00:31:44.615 13:42:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.615 13:42:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.615 rmmod nvme_tcp 00:31:44.615 rmmod nvme_fabrics 00:31:44.615 rmmod nvme_keyring 00:31:44.615 13:42:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.615 13:42:01 -- nvmf/common.sh@124 -- # set -e 00:31:44.615 13:42:01 -- nvmf/common.sh@125 -- # return 0 00:31:44.615 13:42:01 -- nvmf/common.sh@478 -- # '[' -n 91805 ']' 00:31:44.615 13:42:01 -- nvmf/common.sh@479 -- # killprocess 91805 00:31:44.615 13:42:01 -- common/autotest_common.sh@936 -- # '[' -z 91805 ']' 00:31:44.615 13:42:01 -- common/autotest_common.sh@940 -- # kill -0 91805 00:31:44.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91805) - No such process 00:31:44.615 Process with pid 91805 is not found 00:31:44.615 13:42:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91805 is not found' 00:31:44.615 13:42:01 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:44.615 13:42:01 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:44.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:44.898 Waiting for block devices as requested 00:31:45.157 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:45.157 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:45.157 13:42:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:45.157 13:42:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:45.157 13:42:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:45.157 13:42:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:45.157 13:42:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.157 13:42:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:45.157 13:42:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.157 13:42:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:45.157 00:31:45.157 real 0m27.938s 00:31:45.157 user 0m53.412s 00:31:45.157 sys 0m7.407s 00:31:45.157 13:42:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:45.157 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:31:45.157 ************************************ 00:31:45.157 END TEST nvmf_abort_qd_sizes 00:31:45.157 ************************************ 00:31:45.416 13:42:02 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:45.416 13:42:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:45.416 13:42:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:45.416 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:31:45.416 ************************************ 00:31:45.416 START TEST keyring_file 00:31:45.416 ************************************ 00:31:45.416 13:42:02 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:45.416 * Looking for test storage... 00:31:45.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:45.416 13:42:02 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:45.416 13:42:02 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:45.416 13:42:02 -- nvmf/common.sh@7 -- # uname -s 00:31:45.416 13:42:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.416 13:42:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.416 13:42:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.416 13:42:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.416 13:42:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.416 13:42:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.416 13:42:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.416 13:42:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.416 13:42:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.416 13:42:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.416 13:42:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:31:45.416 13:42:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae54e03c-6c6c-4f57-8ca7-352caf92cee7 00:31:45.416 13:42:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.416 13:42:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.416 13:42:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:45.416 13:42:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.416 13:42:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:45.416 13:42:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.416 13:42:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.416 13:42:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.416 13:42:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.416 13:42:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.416 13:42:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.416 13:42:02 -- paths/export.sh@5 -- # export PATH 00:31:45.416 13:42:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.416 13:42:02 -- nvmf/common.sh@47 -- # : 0 00:31:45.416 13:42:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:45.416 13:42:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:45.416 13:42:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.416 13:42:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.416 13:42:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.416 13:42:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:45.416 13:42:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:45.416 13:42:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:45.416 13:42:02 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:45.416 13:42:02 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:45.416 13:42:02 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:45.416 13:42:02 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:45.416 13:42:02 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:45.416 13:42:02 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:45.416 13:42:02 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:45.416 13:42:02 -- keyring/common.sh@15 -- # local name key digest path 00:31:45.416 13:42:02 -- keyring/common.sh@17 -- # name=key0 00:31:45.416 13:42:02 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:45.416 13:42:02 -- keyring/common.sh@17 -- # digest=0 00:31:45.416 13:42:02 -- keyring/common.sh@18 -- # mktemp 00:31:45.416 13:42:02 -- keyring/common.sh@18 -- # path=/tmp/tmp.DJO0Wyd34a 00:31:45.416 13:42:02 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:45.416 13:42:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:45.416 13:42:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:45.416 13:42:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:45.416 13:42:02 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:45.416 13:42:02 -- nvmf/common.sh@693 -- # digest=0 00:31:45.416 13:42:02 -- nvmf/common.sh@694 -- # python - 00:31:45.676 13:42:02 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DJO0Wyd34a 00:31:45.676 13:42:02 -- keyring/common.sh@23 -- # echo /tmp/tmp.DJO0Wyd34a 00:31:45.676 13:42:02 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DJO0Wyd34a 00:31:45.676 13:42:02 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:45.676 13:42:02 -- keyring/common.sh@15 -- # local name key digest path 00:31:45.676 13:42:02 -- keyring/common.sh@17 -- # name=key1 00:31:45.676 13:42:02 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:45.676 13:42:02 -- keyring/common.sh@17 -- # digest=0 00:31:45.676 13:42:02 -- keyring/common.sh@18 -- # mktemp 00:31:45.676 13:42:02 -- keyring/common.sh@18 -- # path=/tmp/tmp.LUuHbyrrZS 00:31:45.676 13:42:02 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:45.676 13:42:02 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:31:45.676 13:42:02 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:45.676 13:42:02 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:45.676 13:42:02 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:31:45.676 13:42:02 -- nvmf/common.sh@693 -- # digest=0 00:31:45.676 13:42:02 -- nvmf/common.sh@694 -- # python - 00:31:45.676 13:42:02 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LUuHbyrrZS 00:31:45.676 13:42:02 -- keyring/common.sh@23 -- # echo /tmp/tmp.LUuHbyrrZS 00:31:45.676 13:42:02 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LUuHbyrrZS 00:31:45.676 13:42:02 -- keyring/file.sh@30 -- # tgtpid=92711 00:31:45.676 13:42:02 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:45.676 13:42:02 -- keyring/file.sh@32 -- # waitforlisten 92711 00:31:45.676 13:42:02 -- common/autotest_common.sh@817 -- # '[' -z 92711 ']' 00:31:45.676 13:42:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.676 13:42:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:45.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:45.676 13:42:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.676 13:42:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:45.676 13:42:02 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 [2024-04-26 13:42:03.030566] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 00:31:45.676 [2024-04-26 13:42:03.030702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92711 ] 00:31:45.956 [2024-04-26 13:42:03.175682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.956 [2024-04-26 13:42:03.310982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.892 13:42:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:46.892 13:42:04 -- common/autotest_common.sh@850 -- # return 0 00:31:46.892 13:42:04 -- keyring/file.sh@33 -- # rpc_cmd 00:31:46.892 13:42:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.892 13:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:46.892 [2024-04-26 13:42:04.013260] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.892 null0 00:31:46.892 [2024-04-26 13:42:04.045173] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:46.892 [2024-04-26 13:42:04.045443] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:46.892 [2024-04-26 13:42:04.053179] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:46.892 13:42:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:46.892 13:42:04 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.892 13:42:04 -- common/autotest_common.sh@638 -- # local es=0 00:31:46.892 13:42:04 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.892 13:42:04 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:46.892 13:42:04 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:46.892 13:42:04 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:46.892 13:42:04 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:46.892 13:42:04 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:46.892 13:42:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:46.892 13:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:46.892 [2024-04-26 13:42:04.065185] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/26 13:42:04 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:46.892 request: 00:31:46.892 { 00:31:46.892 "method": "nvmf_subsystem_add_listener", 00:31:46.892 "params": { 00:31:46.892 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:46.892 "secure_channel": false, 00:31:46.892 "listen_address": { 00:31:46.892 "trtype": "tcp", 00:31:46.892 "traddr": "127.0.0.1", 00:31:46.892 "trsvcid": "4420" 00:31:46.892 } 00:31:46.892 } 00:31:46.892 } 00:31:46.892 Got JSON-RPC error response 00:31:46.892 GoRPCClient: error on JSON-RPC call 00:31:46.892 13:42:04 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:46.892 13:42:04 -- common/autotest_common.sh@641 -- # es=1 00:31:46.892 13:42:04 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:46.892 13:42:04 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:46.892 13:42:04 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:46.892 13:42:04 -- keyring/file.sh@46 -- # bperfpid=92746 00:31:46.892 13:42:04 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:46.892 13:42:04 -- keyring/file.sh@48 -- # waitforlisten 92746 /var/tmp/bperf.sock 00:31:46.892 13:42:04 -- common/autotest_common.sh@817 -- # '[' -z 92746 ']' 00:31:46.892 13:42:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:46.892 13:42:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:46.892 13:42:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:46.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:46.892 13:42:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:46.892 13:42:04 -- common/autotest_common.sh@10 -- # set +x 00:31:46.893 [2024-04-26 13:42:04.134051] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
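From here on the keyring_file test is driven over a second JSON-RPC socket, /var/tmp/bperf.sock, owned by the bdevperf instance whose startup banner appears around this point. A condensed sketch of the happy-path flow, using only commands that appear verbatim in this trace, is below; it assumes the spdk_tgt side (pid 92711) was already configured earlier in the test with the matching PSK for host0, and the key files come from prep_key/format_interchange_psk, which produces the NVMeTLSkey-1 interchange representation of the configured key (digest 0).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # bdevperf was launched idle (-z), waiting for RPCs on $sock:
  #   build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z

  chmod 0600 /tmp/tmp.DJO0Wyd34a        # the keyring refuses key files accessible to group/other (see the 0660 negative test below)
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a
  "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.LUuHbyrrZS

  # attach a TCP controller using key0 as the TLS PSK, then run the workload
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests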
00:31:46.893 [2024-04-26 13:42:04.134172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92746 ] 00:31:46.893 [2024-04-26 13:42:04.274028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.152 [2024-04-26 13:42:04.397510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.089 13:42:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:48.089 13:42:05 -- common/autotest_common.sh@850 -- # return 0 00:31:48.089 13:42:05 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:48.089 13:42:05 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:48.089 13:42:05 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LUuHbyrrZS 00:31:48.089 13:42:05 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LUuHbyrrZS 00:31:48.347 13:42:05 -- keyring/file.sh@51 -- # jq -r .path 00:31:48.347 13:42:05 -- keyring/file.sh@51 -- # get_key key0 00:31:48.347 13:42:05 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.347 13:42:05 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.347 13:42:05 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.606 13:42:05 -- keyring/file.sh@51 -- # [[ /tmp/tmp.DJO0Wyd34a == \/\t\m\p\/\t\m\p\.\D\J\O\0\W\y\d\3\4\a ]] 00:31:48.606 13:42:05 -- keyring/file.sh@52 -- # get_key key1 00:31:48.606 13:42:05 -- keyring/file.sh@52 -- # jq -r .path 00:31:48.606 13:42:05 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.606 13:42:05 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:48.606 13:42:05 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:48.864 13:42:06 -- keyring/file.sh@52 -- # [[ /tmp/tmp.LUuHbyrrZS == \/\t\m\p\/\t\m\p\.\L\U\u\H\b\y\r\r\Z\S ]] 00:31:48.864 13:42:06 -- keyring/file.sh@53 -- # get_refcnt key0 00:31:48.864 13:42:06 -- keyring/common.sh@12 -- # get_key key0 00:31:48.864 13:42:06 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:48.864 13:42:06 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:48.864 13:42:06 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:48.864 13:42:06 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:49.123 13:42:06 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:49.123 13:42:06 -- keyring/file.sh@54 -- # get_refcnt key1 00:31:49.123 13:42:06 -- keyring/common.sh@12 -- # get_key key1 00:31:49.123 13:42:06 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:49.123 13:42:06 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:49.123 13:42:06 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:49.123 13:42:06 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:49.381 13:42:06 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:49.381 13:42:06 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:31:49.381 13:42:06 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:49.639 [2024-04-26 13:42:07.022034] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:49.897 nvme0n1 00:31:49.897 13:42:07 -- keyring/file.sh@59 -- # get_refcnt key0 00:31:49.897 13:42:07 -- keyring/common.sh@12 -- # get_key key0 00:31:49.898 13:42:07 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:49.898 13:42:07 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:49.898 13:42:07 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:49.898 13:42:07 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:50.156 13:42:07 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:50.156 13:42:07 -- keyring/file.sh@60 -- # get_refcnt key1 00:31:50.156 13:42:07 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:50.156 13:42:07 -- keyring/common.sh@12 -- # get_key key1 00:31:50.156 13:42:07 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:50.156 13:42:07 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:50.156 13:42:07 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:50.415 13:42:07 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:50.415 13:42:07 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:50.415 Running I/O for 1 seconds... 00:31:51.358 00:31:51.358 Latency(us) 00:31:51.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:51.358 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:51.358 nvme0n1 : 1.01 11232.47 43.88 0.00 0.00 11361.01 5362.04 51475.55 00:31:51.358 =================================================================================================================== 00:31:51.358 Total : 11232.47 43.88 0.00 0.00 11361.01 5362.04 51475.55 00:31:51.358 0 00:31:51.358 13:42:08 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:51.358 13:42:08 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:51.924 13:42:09 -- keyring/file.sh@65 -- # get_refcnt key0 00:31:51.924 13:42:09 -- keyring/common.sh@12 -- # get_key key0 00:31:51.924 13:42:09 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:51.924 13:42:09 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:51.924 13:42:09 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:51.924 13:42:09 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:51.924 13:42:09 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:51.924 13:42:09 -- keyring/file.sh@66 -- # get_refcnt key1 00:31:51.924 13:42:09 -- keyring/common.sh@12 -- # get_key key1 00:31:51.924 13:42:09 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:51.924 13:42:09 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:51.924 13:42:09 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:51.924 13:42:09 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:52.183 
13:42:09 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:52.183 13:42:09 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:52.183 13:42:09 -- common/autotest_common.sh@638 -- # local es=0 00:31:52.183 13:42:09 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:52.183 13:42:09 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:52.183 13:42:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:52.183 13:42:09 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:52.183 13:42:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:52.183 13:42:09 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:52.183 13:42:09 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:52.442 [2024-04-26 13:42:09.830551] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:52.442 [2024-04-26 13:42:09.831450] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d570 (107): Transport endpoint is not connected 00:31:52.442 [2024-04-26 13:42:09.832440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf8d570 (9): Bad file descriptor 00:31:52.442 [2024-04-26 13:42:09.833437] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:52.442 [2024-04-26 13:42:09.833459] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:52.442 [2024-04-26 13:42:09.833470] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
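The bdev_nvme_attach_controller call with --psk key1 that the trace is in the middle of reporting is expected to fail (it runs under the NOT wrapper). Once it returns, the (( 1 == 1 )) checks below come from the get_refcnt helper: it dumps the bdevperf keyring over the same socket and filters it with jq, and both keys should still show a refcount of 1 (only the keyring holding them), versus 2 while an attached controller holds a key, as key0 did earlier in the run. A one-line equivalent of that check, using the same socket and key names as this run:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
      | jq -r '.[] | select(.name == "key1") | .refcnt'    # expect 1 here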
00:31:52.442 2024/04/26 13:42:09 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:31:52.442 request: 00:31:52.442 { 00:31:52.442 "method": "bdev_nvme_attach_controller", 00:31:52.442 "params": { 00:31:52.442 "name": "nvme0", 00:31:52.442 "trtype": "tcp", 00:31:52.442 "traddr": "127.0.0.1", 00:31:52.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:52.442 "adrfam": "ipv4", 00:31:52.442 "trsvcid": "4420", 00:31:52.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.442 "psk": "key1" 00:31:52.442 } 00:31:52.442 } 00:31:52.442 Got JSON-RPC error response 00:31:52.442 GoRPCClient: error on JSON-RPC call 00:31:52.442 13:42:09 -- common/autotest_common.sh@641 -- # es=1 00:31:52.442 13:42:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:52.442 13:42:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:52.442 13:42:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:52.442 13:42:09 -- keyring/file.sh@71 -- # get_refcnt key0 00:31:52.442 13:42:09 -- keyring/common.sh@12 -- # get_key key0 00:31:52.442 13:42:09 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:52.442 13:42:09 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:52.442 13:42:09 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:52.442 13:42:09 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.011 13:42:10 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:53.011 13:42:10 -- keyring/file.sh@72 -- # get_refcnt key1 00:31:53.011 13:42:10 -- keyring/common.sh@12 -- # get_key key1 00:31:53.011 13:42:10 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:53.011 13:42:10 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.011 13:42:10 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.011 13:42:10 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:53.011 13:42:10 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:53.011 13:42:10 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:53.011 13:42:10 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:53.578 13:42:10 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:53.578 13:42:10 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:53.578 13:42:11 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:53.578 13:42:11 -- keyring/file.sh@77 -- # jq length 00:31:53.578 13:42:11 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:54.144 13:42:11 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:54.144 13:42:11 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DJO0Wyd34a 00:31:54.144 13:42:11 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.144 13:42:11 -- common/autotest_common.sh@638 -- # local es=0 00:31:54.144 13:42:11 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.144 13:42:11 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:54.144 
13:42:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:54.144 13:42:11 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:54.144 13:42:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:54.144 13:42:11 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.144 13:42:11 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.402 [2024-04-26 13:42:11.618795] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DJO0Wyd34a': 0100660 00:31:54.402 [2024-04-26 13:42:11.618846] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:54.402 2024/04/26 13:42:11 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.DJO0Wyd34a], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:54.402 request: 00:31:54.402 { 00:31:54.402 "method": "keyring_file_add_key", 00:31:54.402 "params": { 00:31:54.402 "name": "key0", 00:31:54.402 "path": "/tmp/tmp.DJO0Wyd34a" 00:31:54.402 } 00:31:54.402 } 00:31:54.402 Got JSON-RPC error response 00:31:54.402 GoRPCClient: error on JSON-RPC call 00:31:54.402 13:42:11 -- common/autotest_common.sh@641 -- # es=1 00:31:54.402 13:42:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:54.402 13:42:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:54.402 13:42:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:54.402 13:42:11 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DJO0Wyd34a 00:31:54.402 13:42:11 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.402 13:42:11 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DJO0Wyd34a 00:31:54.660 13:42:11 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DJO0Wyd34a 00:31:54.660 13:42:11 -- keyring/file.sh@88 -- # get_refcnt key0 00:31:54.660 13:42:11 -- keyring/common.sh@12 -- # get_key key0 00:31:54.660 13:42:11 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:54.660 13:42:11 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:54.660 13:42:11 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:54.660 13:42:11 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:54.918 13:42:12 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:54.918 13:42:12 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.918 13:42:12 -- common/autotest_common.sh@638 -- # local es=0 00:31:54.918 13:42:12 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.918 13:42:12 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:54.918 13:42:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:54.918 13:42:12 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:54.918 13:42:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:54.918 13:42:12 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.918 13:42:12 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:55.176 [2024-04-26 13:42:12.391080] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DJO0Wyd34a': No such file or directory 00:31:55.177 [2024-04-26 13:42:12.391136] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:55.177 [2024-04-26 13:42:12.391164] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:55.177 [2024-04-26 13:42:12.391174] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:55.177 [2024-04-26 13:42:12.391184] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:55.177 2024/04/26 13:42:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:55.177 request: 00:31:55.177 { 00:31:55.177 "method": "bdev_nvme_attach_controller", 00:31:55.177 "params": { 00:31:55.177 "name": "nvme0", 00:31:55.177 "trtype": "tcp", 00:31:55.177 "traddr": "127.0.0.1", 00:31:55.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:55.177 "adrfam": "ipv4", 00:31:55.177 "trsvcid": "4420", 00:31:55.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:55.177 "psk": "key0" 00:31:55.177 } 00:31:55.177 } 00:31:55.177 Got JSON-RPC error response 00:31:55.177 GoRPCClient: error on JSON-RPC call 00:31:55.177 13:42:12 -- common/autotest_common.sh@641 -- # es=1 00:31:55.177 13:42:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:55.177 13:42:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:55.177 13:42:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:55.177 13:42:12 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:55.177 13:42:12 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:55.435 13:42:12 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:55.435 13:42:12 -- keyring/common.sh@15 -- # local name key digest path 00:31:55.435 13:42:12 -- keyring/common.sh@17 -- # name=key0 00:31:55.435 13:42:12 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:55.435 13:42:12 -- keyring/common.sh@17 -- # digest=0 00:31:55.435 13:42:12 -- keyring/common.sh@18 -- # mktemp 00:31:55.435 13:42:12 -- keyring/common.sh@18 -- # path=/tmp/tmp.kbk33VWRUw 00:31:55.435 13:42:12 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:55.435 13:42:12 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:55.435 13:42:12 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:55.435 13:42:12 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:55.435 13:42:12 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:55.435 13:42:12 -- nvmf/common.sh@693 -- # digest=0 00:31:55.435 13:42:12 -- nvmf/common.sh@694 -- # python - 00:31:55.435 
13:42:12 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.kbk33VWRUw 00:31:55.435 13:42:12 -- keyring/common.sh@23 -- # echo /tmp/tmp.kbk33VWRUw 00:31:55.435 13:42:12 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.kbk33VWRUw 00:31:55.435 13:42:12 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kbk33VWRUw 00:31:55.435 13:42:12 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kbk33VWRUw 00:31:55.693 13:42:12 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:55.693 13:42:12 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:55.951 nvme0n1 00:31:55.951 13:42:13 -- keyring/file.sh@99 -- # get_refcnt key0 00:31:55.951 13:42:13 -- keyring/common.sh@12 -- # get_key key0 00:31:55.951 13:42:13 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:55.951 13:42:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.951 13:42:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:55.951 13:42:13 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.209 13:42:13 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:56.209 13:42:13 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:56.209 13:42:13 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:56.467 13:42:13 -- keyring/file.sh@101 -- # jq -r .removed 00:31:56.467 13:42:13 -- keyring/file.sh@101 -- # get_key key0 00:31:56.467 13:42:13 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:56.467 13:42:13 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.467 13:42:13 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.736 13:42:14 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:56.736 13:42:14 -- keyring/file.sh@102 -- # get_refcnt key0 00:31:56.737 13:42:14 -- keyring/common.sh@12 -- # get_key key0 00:31:56.737 13:42:14 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.737 13:42:14 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.737 13:42:14 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:56.737 13:42:14 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.995 13:42:14 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:56.995 13:42:14 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:56.995 13:42:14 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:57.562 13:42:14 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:57.562 13:42:14 -- keyring/file.sh@104 -- # jq length 00:31:57.562 13:42:14 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.562 13:42:14 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:57.562 13:42:14 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.kbk33VWRUw 00:31:57.562 13:42:14 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.kbk33VWRUw 00:31:57.821 13:42:15 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LUuHbyrrZS 00:31:57.821 13:42:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LUuHbyrrZS 00:31:58.079 13:42:15 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.079 13:42:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.646 nvme0n1 00:31:58.646 13:42:15 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:58.646 13:42:15 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:58.905 13:42:16 -- keyring/file.sh@112 -- # config='{ 00:31:58.905 "subsystems": [ 00:31:58.905 { 00:31:58.905 "subsystem": "keyring", 00:31:58.905 "config": [ 00:31:58.905 { 00:31:58.905 "method": "keyring_file_add_key", 00:31:58.905 "params": { 00:31:58.905 "name": "key0", 00:31:58.905 "path": "/tmp/tmp.kbk33VWRUw" 00:31:58.905 } 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "method": "keyring_file_add_key", 00:31:58.905 "params": { 00:31:58.905 "name": "key1", 00:31:58.905 "path": "/tmp/tmp.LUuHbyrrZS" 00:31:58.905 } 00:31:58.905 } 00:31:58.905 ] 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "subsystem": "iobuf", 00:31:58.905 "config": [ 00:31:58.905 { 00:31:58.905 "method": "iobuf_set_options", 00:31:58.905 "params": { 00:31:58.905 "large_bufsize": 135168, 00:31:58.905 "large_pool_count": 1024, 00:31:58.905 "small_bufsize": 8192, 00:31:58.905 "small_pool_count": 8192 00:31:58.905 } 00:31:58.905 } 00:31:58.905 ] 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "subsystem": "sock", 00:31:58.905 "config": [ 00:31:58.905 { 00:31:58.905 "method": "sock_impl_set_options", 00:31:58.905 "params": { 00:31:58.905 "enable_ktls": false, 00:31:58.905 "enable_placement_id": 0, 00:31:58.905 "enable_quickack": false, 00:31:58.905 "enable_recv_pipe": true, 00:31:58.905 "enable_zerocopy_send_client": false, 00:31:58.905 "enable_zerocopy_send_server": true, 00:31:58.905 "impl_name": "posix", 00:31:58.905 "recv_buf_size": 2097152, 00:31:58.905 "send_buf_size": 2097152, 00:31:58.905 "tls_version": 0, 00:31:58.905 "zerocopy_threshold": 0 00:31:58.905 } 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "method": "sock_impl_set_options", 00:31:58.905 "params": { 00:31:58.905 "enable_ktls": false, 00:31:58.905 "enable_placement_id": 0, 00:31:58.905 "enable_quickack": false, 00:31:58.905 "enable_recv_pipe": true, 00:31:58.905 "enable_zerocopy_send_client": false, 00:31:58.905 "enable_zerocopy_send_server": true, 00:31:58.905 "impl_name": "ssl", 00:31:58.905 "recv_buf_size": 4096, 00:31:58.905 "send_buf_size": 4096, 00:31:58.905 "tls_version": 0, 00:31:58.905 "zerocopy_threshold": 0 00:31:58.905 } 00:31:58.905 } 00:31:58.905 ] 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "subsystem": "vmd", 00:31:58.905 "config": [] 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "subsystem": "accel", 00:31:58.905 "config": [ 00:31:58.905 { 00:31:58.905 "method": "accel_set_options", 00:31:58.905 "params": { 00:31:58.905 "buf_count": 2048, 00:31:58.905 "large_cache_size": 16, 00:31:58.905 
"sequence_count": 2048, 00:31:58.905 "small_cache_size": 128, 00:31:58.905 "task_count": 2048 00:31:58.905 } 00:31:58.905 } 00:31:58.905 ] 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "subsystem": "bdev", 00:31:58.905 "config": [ 00:31:58.905 { 00:31:58.905 "method": "bdev_set_options", 00:31:58.905 "params": { 00:31:58.905 "bdev_auto_examine": true, 00:31:58.905 "bdev_io_cache_size": 256, 00:31:58.905 "bdev_io_pool_size": 65535, 00:31:58.905 "iobuf_large_cache_size": 16, 00:31:58.905 "iobuf_small_cache_size": 128 00:31:58.905 } 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "method": "bdev_raid_set_options", 00:31:58.905 "params": { 00:31:58.905 "process_window_size_kb": 1024 00:31:58.905 } 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "method": "bdev_iscsi_set_options", 00:31:58.905 "params": { 00:31:58.905 "timeout_sec": 30 00:31:58.905 } 00:31:58.905 }, 00:31:58.905 { 00:31:58.905 "method": "bdev_nvme_set_options", 00:31:58.905 "params": { 00:31:58.906 "action_on_timeout": "none", 00:31:58.906 "allow_accel_sequence": false, 00:31:58.906 "arbitration_burst": 0, 00:31:58.906 "bdev_retry_count": 3, 00:31:58.906 "ctrlr_loss_timeout_sec": 0, 00:31:58.906 "delay_cmd_submit": true, 00:31:58.906 "dhchap_dhgroups": [ 00:31:58.906 "null", 00:31:58.906 "ffdhe2048", 00:31:58.906 "ffdhe3072", 00:31:58.906 "ffdhe4096", 00:31:58.906 "ffdhe6144", 00:31:58.906 "ffdhe8192" 00:31:58.906 ], 00:31:58.906 "dhchap_digests": [ 00:31:58.906 "sha256", 00:31:58.906 "sha384", 00:31:58.906 "sha512" 00:31:58.906 ], 00:31:58.906 "disable_auto_failback": false, 00:31:58.906 "fast_io_fail_timeout_sec": 0, 00:31:58.906 "generate_uuids": false, 00:31:58.906 "high_priority_weight": 0, 00:31:58.906 "io_path_stat": false, 00:31:58.906 "io_queue_requests": 512, 00:31:58.906 "keep_alive_timeout_ms": 10000, 00:31:58.906 "low_priority_weight": 0, 00:31:58.906 "medium_priority_weight": 0, 00:31:58.906 "nvme_adminq_poll_period_us": 10000, 00:31:58.906 "nvme_error_stat": false, 00:31:58.906 "nvme_ioq_poll_period_us": 0, 00:31:58.906 "rdma_cm_event_timeout_ms": 0, 00:31:58.906 "rdma_max_cq_size": 0, 00:31:58.906 "rdma_srq_size": 0, 00:31:58.906 "reconnect_delay_sec": 0, 00:31:58.906 "timeout_admin_us": 0, 00:31:58.906 "timeout_us": 0, 00:31:58.906 "transport_ack_timeout": 0, 00:31:58.906 "transport_retry_count": 4, 00:31:58.906 "transport_tos": 0 00:31:58.906 } 00:31:58.906 }, 00:31:58.906 { 00:31:58.906 "method": "bdev_nvme_attach_controller", 00:31:58.906 "params": { 00:31:58.906 "adrfam": "IPv4", 00:31:58.906 "ctrlr_loss_timeout_sec": 0, 00:31:58.906 "ddgst": false, 00:31:58.906 "fast_io_fail_timeout_sec": 0, 00:31:58.906 "hdgst": false, 00:31:58.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.906 "name": "nvme0", 00:31:58.906 "prchk_guard": false, 00:31:58.906 "prchk_reftag": false, 00:31:58.906 "psk": "key0", 00:31:58.906 "reconnect_delay_sec": 0, 00:31:58.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.906 "traddr": "127.0.0.1", 00:31:58.906 "trsvcid": "4420", 00:31:58.906 "trtype": "TCP" 00:31:58.906 } 00:31:58.906 }, 00:31:58.906 { 00:31:58.906 "method": "bdev_nvme_set_hotplug", 00:31:58.906 "params": { 00:31:58.906 "enable": false, 00:31:58.906 "period_us": 100000 00:31:58.906 } 00:31:58.906 }, 00:31:58.906 { 00:31:58.906 "method": "bdev_wait_for_examine" 00:31:58.906 } 00:31:58.906 ] 00:31:58.906 }, 00:31:58.906 { 00:31:58.906 "subsystem": "nbd", 00:31:58.906 "config": [] 00:31:58.906 } 00:31:58.906 ] 00:31:58.906 }' 00:31:58.906 13:42:16 -- keyring/file.sh@114 -- # killprocess 92746 00:31:58.906 13:42:16 -- 
common/autotest_common.sh@936 -- # '[' -z 92746 ']' 00:31:58.906 13:42:16 -- common/autotest_common.sh@940 -- # kill -0 92746 00:31:58.906 13:42:16 -- common/autotest_common.sh@941 -- # uname 00:31:58.906 13:42:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:58.906 13:42:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92746 00:31:58.906 13:42:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:58.906 13:42:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:58.906 killing process with pid 92746 00:31:58.906 13:42:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92746' 00:31:58.906 Received shutdown signal, test time was about 1.000000 seconds 00:31:58.906 00:31:58.906 Latency(us) 00:31:58.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:58.906 =================================================================================================================== 00:31:58.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:58.906 13:42:16 -- common/autotest_common.sh@955 -- # kill 92746 00:31:58.906 13:42:16 -- common/autotest_common.sh@960 -- # wait 92746 00:31:59.165 13:42:16 -- keyring/file.sh@117 -- # bperfpid=93223 00:31:59.165 13:42:16 -- keyring/file.sh@119 -- # waitforlisten 93223 /var/tmp/bperf.sock 00:31:59.165 13:42:16 -- common/autotest_common.sh@817 -- # '[' -z 93223 ']' 00:31:59.165 13:42:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:59.165 13:42:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:59.165 13:42:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:59.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
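The key file consumed earlier in this trace (/tmp/tmp.kbk33VWRUw) is produced by prep_key, which calls format_interchange_psk/format_key from nvmf/common.sh with prefix NVMeTLSkey-1, the key text 00112233445566778899aabbccddeeff and digest 0, runs a small "python -" step, and then chmod 0600s the result. The python body behind that step is not captured in this log; the sketch below is a minimal reconstruction assuming the payload is the literal key text followed by its little-endian CRC32, base64-encoded, which is the usual NVMe-oF TLS PSK interchange layout. The helper name psk_interchange and the mktemp path are illustrative only.

```bash
#!/usr/bin/env bash
# Sketch of an interchange-PSK helper in the style of the format_key()/
# format_interchange_psk() calls traced above (assumptions noted in the lead-in).
psk_interchange() { # prefix key digest -> "PREFIX:dd:<base64(key + crc32)>:"
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
crc = zlib.crc32(key.encode()).to_bytes(4, "little")      # CRC32 of the key text, little-endian (assumed layout)
payload = base64.b64encode(key.encode() + crc).decode()   # key bytes + CRC, base64-encoded
print(f"{prefix}:{digest:02x}:{payload}:", end="")        # digest 0 -> "NVMeTLSkey-1:00:...:"
PYEOF
}

# Usage mirroring the trace (the real test writes /tmp/tmp.kbk33VWRUw):
key_path=$(mktemp)
psk_interchange NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 > "$key_path"
chmod 0600 "$key_path"
```

A key built this way can then be registered exactly as the trace shows, with keyring_file_add_key over the bperf RPC socket, and its refcnt inspected through keyring_get_keys.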
00:31:59.165 13:42:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:59.165 13:42:16 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:59.165 13:42:16 -- common/autotest_common.sh@10 -- # set +x 00:31:59.165 13:42:16 -- keyring/file.sh@115 -- # echo '{ 00:31:59.165 "subsystems": [ 00:31:59.165 { 00:31:59.165 "subsystem": "keyring", 00:31:59.165 "config": [ 00:31:59.165 { 00:31:59.165 "method": "keyring_file_add_key", 00:31:59.165 "params": { 00:31:59.165 "name": "key0", 00:31:59.165 "path": "/tmp/tmp.kbk33VWRUw" 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "keyring_file_add_key", 00:31:59.165 "params": { 00:31:59.165 "name": "key1", 00:31:59.165 "path": "/tmp/tmp.LUuHbyrrZS" 00:31:59.165 } 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "iobuf", 00:31:59.165 "config": [ 00:31:59.165 { 00:31:59.165 "method": "iobuf_set_options", 00:31:59.165 "params": { 00:31:59.165 "large_bufsize": 135168, 00:31:59.165 "large_pool_count": 1024, 00:31:59.165 "small_bufsize": 8192, 00:31:59.165 "small_pool_count": 8192 00:31:59.165 } 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "sock", 00:31:59.165 "config": [ 00:31:59.165 { 00:31:59.165 "method": "sock_impl_set_options", 00:31:59.165 "params": { 00:31:59.165 "enable_ktls": false, 00:31:59.165 "enable_placement_id": 0, 00:31:59.165 "enable_quickack": false, 00:31:59.165 "enable_recv_pipe": true, 00:31:59.165 "enable_zerocopy_send_client": false, 00:31:59.165 "enable_zerocopy_send_server": true, 00:31:59.165 "impl_name": "posix", 00:31:59.165 "recv_buf_size": 2097152, 00:31:59.165 "send_buf_size": 2097152, 00:31:59.165 "tls_version": 0, 00:31:59.165 "zerocopy_threshold": 0 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "sock_impl_set_options", 00:31:59.165 "params": { 00:31:59.165 "enable_ktls": false, 00:31:59.165 "enable_placement_id": 0, 00:31:59.165 "enable_quickack": false, 00:31:59.165 "enable_recv_pipe": true, 00:31:59.165 "enable_zerocopy_send_client": false, 00:31:59.165 "enable_zerocopy_send_server": true, 00:31:59.165 "impl_name": "ssl", 00:31:59.165 "recv_buf_size": 4096, 00:31:59.165 "send_buf_size": 4096, 00:31:59.165 "tls_version": 0, 00:31:59.165 "zerocopy_threshold": 0 00:31:59.165 } 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "vmd", 00:31:59.165 "config": [] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "accel", 00:31:59.165 "config": [ 00:31:59.165 { 00:31:59.165 "method": "accel_set_options", 00:31:59.165 "params": { 00:31:59.165 "buf_count": 2048, 00:31:59.165 "large_cache_size": 16, 00:31:59.165 "sequence_count": 2048, 00:31:59.165 "small_cache_size": 128, 00:31:59.165 "task_count": 2048 00:31:59.165 } 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "bdev", 00:31:59.165 "config": [ 00:31:59.165 { 00:31:59.165 "method": "bdev_set_options", 00:31:59.165 "params": { 00:31:59.165 "bdev_auto_examine": true, 00:31:59.165 "bdev_io_cache_size": 256, 00:31:59.165 "bdev_io_pool_size": 65535, 00:31:59.165 "iobuf_large_cache_size": 16, 00:31:59.165 "iobuf_small_cache_size": 128 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "bdev_raid_set_options", 00:31:59.165 "params": { 00:31:59.165 "process_window_size_kb": 1024 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": 
"bdev_iscsi_set_options", 00:31:59.165 "params": { 00:31:59.165 "timeout_sec": 30 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "bdev_nvme_set_options", 00:31:59.165 "params": { 00:31:59.165 "action_on_timeout": "none", 00:31:59.165 "allow_accel_sequence": false, 00:31:59.165 "arbitration_burst": 0, 00:31:59.165 "bdev_retry_count": 3, 00:31:59.165 "ctrlr_loss_timeout_sec": 0, 00:31:59.165 "delay_cmd_submit": true, 00:31:59.165 "dhchap_dhgroups": [ 00:31:59.165 "null", 00:31:59.165 "ffdhe2048", 00:31:59.165 "ffdhe3072", 00:31:59.165 "ffdhe4096", 00:31:59.165 "ffdhe6144", 00:31:59.165 "ffdhe8192" 00:31:59.165 ], 00:31:59.165 "dhchap_digests": [ 00:31:59.165 "sha256", 00:31:59.165 "sha384", 00:31:59.165 "sha512" 00:31:59.165 ], 00:31:59.165 "disable_auto_failback": false, 00:31:59.165 "fast_io_fail_timeout_sec": 0, 00:31:59.165 "generate_uuids": false, 00:31:59.165 "high_priority_weight": 0, 00:31:59.165 "io_path_stat": false, 00:31:59.165 "io_queue_requests": 512, 00:31:59.165 "keep_alive_timeout_ms": 10000, 00:31:59.165 "low_priority_weight": 0, 00:31:59.165 "medium_priority_weight": 0, 00:31:59.165 "nvme_adminq_poll_period_us": 10000, 00:31:59.165 "nvme_error_stat": false, 00:31:59.165 "nvme_ioq_poll_period_us": 0, 00:31:59.165 "rdma_cm_event_timeout_ms": 0, 00:31:59.165 "rdma_max_cq_size": 0, 00:31:59.165 "rdma_srq_size": 0, 00:31:59.165 "reconnect_delay_sec": 0, 00:31:59.165 "timeout_admin_us": 0, 00:31:59.165 "timeout_us": 0, 00:31:59.165 "transport_ack_timeout": 0, 00:31:59.165 "transport_retry_count": 4, 00:31:59.165 "transport_tos": 0 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "bdev_nvme_attach_controller", 00:31:59.165 "params": { 00:31:59.165 "adrfam": "IPv4", 00:31:59.165 "ctrlr_loss_timeout_sec": 0, 00:31:59.165 "ddgst": false, 00:31:59.165 "fast_io_fail_timeout_sec": 0, 00:31:59.165 "hdgst": false, 00:31:59.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:59.165 "name": "nvme0", 00:31:59.165 "prchk_guard": false, 00:31:59.165 "prchk_reftag": false, 00:31:59.165 "psk": "key0", 00:31:59.165 "reconnect_delay_sec": 0, 00:31:59.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.165 "traddr": "127.0.0.1", 00:31:59.165 "trsvcid": "4420", 00:31:59.165 "trtype": "TCP" 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "bdev_nvme_set_hotplug", 00:31:59.165 "params": { 00:31:59.165 "enable": false, 00:31:59.165 "period_us": 100000 00:31:59.165 } 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "method": "bdev_wait_for_examine" 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }, 00:31:59.165 { 00:31:59.165 "subsystem": "nbd", 00:31:59.165 "config": [] 00:31:59.165 } 00:31:59.165 ] 00:31:59.165 }' 00:31:59.165 [2024-04-26 13:42:16.507094] Starting SPDK v24.05-pre git sha1 f93182c78 / DPDK 23.11.0 initialization... 
00:31:59.165 [2024-04-26 13:42:16.508129] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93223 ] 00:31:59.424 [2024-04-26 13:42:16.643361] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.424 [2024-04-26 13:42:16.747303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.682 [2024-04-26 13:42:16.922237] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:00.288 13:42:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:00.288 13:42:17 -- common/autotest_common.sh@850 -- # return 0 00:32:00.288 13:42:17 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:00.288 13:42:17 -- keyring/file.sh@120 -- # jq length 00:32:00.288 13:42:17 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.288 13:42:17 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:00.288 13:42:17 -- keyring/file.sh@121 -- # get_refcnt key0 00:32:00.288 13:42:17 -- keyring/common.sh@12 -- # get_key key0 00:32:00.288 13:42:17 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.288 13:42:17 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.288 13:42:17 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.288 13:42:17 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:00.855 13:42:18 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:00.855 13:42:18 -- keyring/file.sh@122 -- # get_refcnt key1 00:32:00.855 13:42:18 -- keyring/common.sh@12 -- # get_key key1 00:32:00.855 13:42:18 -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:00.855 13:42:18 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:00.855 13:42:18 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:00.855 13:42:18 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:00.855 13:42:18 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:00.855 13:42:18 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:00.855 13:42:18 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:00.855 13:42:18 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:01.113 13:42:18 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:01.113 13:42:18 -- keyring/file.sh@1 -- # cleanup 00:32:01.113 13:42:18 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.kbk33VWRUw /tmp/tmp.LUuHbyrrZS 00:32:01.371 13:42:18 -- keyring/file.sh@20 -- # killprocess 93223 00:32:01.372 13:42:18 -- common/autotest_common.sh@936 -- # '[' -z 93223 ']' 00:32:01.372 13:42:18 -- common/autotest_common.sh@940 -- # kill -0 93223 00:32:01.372 13:42:18 -- common/autotest_common.sh@941 -- # uname 00:32:01.372 13:42:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:01.372 13:42:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93223 00:32:01.372 killing process with pid 93223 00:32:01.372 Received shutdown signal, test time was about 1.000000 seconds 00:32:01.372 00:32:01.372 Latency(us) 00:32:01.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:01.372 
=================================================================================================================== 00:32:01.372 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:01.372 13:42:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:32:01.372 13:42:18 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:32:01.372 13:42:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93223' 00:32:01.372 13:42:18 -- common/autotest_common.sh@955 -- # kill 93223 00:32:01.372 13:42:18 -- common/autotest_common.sh@960 -- # wait 93223 00:32:01.631 13:42:18 -- keyring/file.sh@21 -- # killprocess 92711 00:32:01.631 13:42:18 -- common/autotest_common.sh@936 -- # '[' -z 92711 ']' 00:32:01.631 13:42:18 -- common/autotest_common.sh@940 -- # kill -0 92711 00:32:01.631 13:42:18 -- common/autotest_common.sh@941 -- # uname 00:32:01.631 13:42:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:01.631 13:42:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92711 00:32:01.631 killing process with pid 92711 00:32:01.631 13:42:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:01.631 13:42:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:01.631 13:42:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92711' 00:32:01.631 13:42:18 -- common/autotest_common.sh@955 -- # kill 92711 00:32:01.631 [2024-04-26 13:42:18.862500] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:01.631 13:42:18 -- common/autotest_common.sh@960 -- # wait 92711 00:32:01.890 00:32:01.890 real 0m16.592s 00:32:01.890 user 0m41.224s 00:32:01.890 sys 0m3.459s 00:32:01.890 ************************************ 00:32:01.890 END TEST keyring_file 00:32:01.890 ************************************ 00:32:01.890 13:42:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:01.890 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 13:42:19 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:32:02.148 13:42:19 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:32:02.148 13:42:19 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:32:02.148 13:42:19 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:32:02.148 13:42:19 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:32:02.148 13:42:19 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:32:02.148 13:42:19 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:32:02.148 13:42:19 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:32:02.148 13:42:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:02.148 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:32:02.148 13:42:19 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:32:02.148 13:42:19 -- common/autotest_common.sh@1378 -- # local 
autotest_es=0 00:32:02.148 13:42:19 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:32:02.148 13:42:19 -- common/autotest_common.sh@10 -- # set +x 00:32:04.076 INFO: APP EXITING 00:32:04.076 INFO: killing all VMs 00:32:04.076 INFO: killing vhost app 00:32:04.076 INFO: EXIT DONE 00:32:04.335 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:04.335 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:04.335 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:05.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:05.272 Cleaning 00:32:05.272 Removing: /var/run/dpdk/spdk0/config 00:32:05.272 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:05.272 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:05.272 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:05.272 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:05.272 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:05.272 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:05.272 Removing: /var/run/dpdk/spdk1/config 00:32:05.272 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:05.272 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:05.272 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:05.272 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:05.272 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:05.272 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:05.272 Removing: /var/run/dpdk/spdk2/config 00:32:05.272 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:05.272 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:05.272 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:05.272 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:05.272 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:05.272 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:05.272 Removing: /var/run/dpdk/spdk3/config 00:32:05.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:05.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:05.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:05.272 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:05.272 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:05.272 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:05.272 Removing: /var/run/dpdk/spdk4/config 00:32:05.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:05.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:05.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:05.272 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:05.272 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:05.272 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:05.272 Removing: /dev/shm/nvmf_trace.0 00:32:05.272 Removing: /dev/shm/spdk_tgt_trace.pid60135 00:32:05.272 Removing: /var/run/dpdk/spdk0 00:32:05.272 Removing: /var/run/dpdk/spdk1 00:32:05.272 Removing: /var/run/dpdk/spdk2 00:32:05.272 Removing: /var/run/dpdk/spdk3 00:32:05.272 Removing: /var/run/dpdk/spdk4 00:32:05.272 Removing: /var/run/dpdk/spdk_pid59967 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60135 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60439 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60530 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60575 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60687 00:32:05.272 Removing: 
/var/run/dpdk/spdk_pid60717 00:32:05.272 Removing: /var/run/dpdk/spdk_pid60856 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61132 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61308 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61396 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61493 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61593 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61635 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61675 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61741 00:32:05.272 Removing: /var/run/dpdk/spdk_pid61867 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62507 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62576 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62649 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62677 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62767 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62800 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62883 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62911 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62967 00:32:05.272 Removing: /var/run/dpdk/spdk_pid62997 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63052 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63082 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63238 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63283 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63357 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63442 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63477 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63553 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63586 00:32:05.272 Removing: /var/run/dpdk/spdk_pid63630 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63663 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63707 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63751 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63788 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63832 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63872 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63910 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63954 00:32:05.531 Removing: /var/run/dpdk/spdk_pid63993 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64031 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64070 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64108 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64147 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64191 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64234 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64276 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64321 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64355 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64430 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64551 00:32:05.531 Removing: /var/run/dpdk/spdk_pid64988 00:32:05.531 Removing: /var/run/dpdk/spdk_pid68429 00:32:05.531 Removing: /var/run/dpdk/spdk_pid68777 00:32:05.531 Removing: /var/run/dpdk/spdk_pid69985 00:32:05.531 Removing: /var/run/dpdk/spdk_pid70367 00:32:05.531 Removing: /var/run/dpdk/spdk_pid70631 00:32:05.531 Removing: /var/run/dpdk/spdk_pid70677 00:32:05.531 Removing: /var/run/dpdk/spdk_pid71569 00:32:05.531 Removing: /var/run/dpdk/spdk_pid71619 00:32:05.531 Removing: /var/run/dpdk/spdk_pid72008 00:32:05.531 Removing: /var/run/dpdk/spdk_pid72552 00:32:05.531 Removing: /var/run/dpdk/spdk_pid72995 00:32:05.531 Removing: /var/run/dpdk/spdk_pid73982 00:32:05.531 Removing: /var/run/dpdk/spdk_pid74976 00:32:05.531 Removing: /var/run/dpdk/spdk_pid75100 00:32:05.531 Removing: /var/run/dpdk/spdk_pid75162 00:32:05.531 Removing: /var/run/dpdk/spdk_pid76643 00:32:05.531 Removing: /var/run/dpdk/spdk_pid76890 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77343 
00:32:05.531 Removing: /var/run/dpdk/spdk_pid77454 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77600 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77646 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77691 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77737 00:32:05.531 Removing: /var/run/dpdk/spdk_pid77895 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78053 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78324 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78450 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78705 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78829 00:32:05.531 Removing: /var/run/dpdk/spdk_pid78969 00:32:05.531 Removing: /var/run/dpdk/spdk_pid79321 00:32:05.531 Removing: /var/run/dpdk/spdk_pid79758 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80067 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80579 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80583 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80935 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80955 00:32:05.531 Removing: /var/run/dpdk/spdk_pid80973 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81000 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81012 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81315 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81368 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81705 00:32:05.531 Removing: /var/run/dpdk/spdk_pid81961 00:32:05.531 Removing: /var/run/dpdk/spdk_pid82466 00:32:05.531 Removing: /var/run/dpdk/spdk_pid83013 00:32:05.531 Removing: /var/run/dpdk/spdk_pid83607 00:32:05.531 Removing: /var/run/dpdk/spdk_pid83609 00:32:05.531 Removing: /var/run/dpdk/spdk_pid85595 00:32:05.531 Removing: /var/run/dpdk/spdk_pid85691 00:32:05.531 Removing: /var/run/dpdk/spdk_pid85782 00:32:05.531 Removing: /var/run/dpdk/spdk_pid85878 00:32:05.531 Removing: /var/run/dpdk/spdk_pid86049 00:32:05.531 Removing: /var/run/dpdk/spdk_pid86141 00:32:05.531 Removing: /var/run/dpdk/spdk_pid86231 00:32:05.531 Removing: /var/run/dpdk/spdk_pid86328 00:32:05.531 Removing: /var/run/dpdk/spdk_pid86671 00:32:05.531 Removing: /var/run/dpdk/spdk_pid87372 00:32:05.531 Removing: /var/run/dpdk/spdk_pid88739 00:32:05.531 Removing: /var/run/dpdk/spdk_pid88945 00:32:05.531 Removing: /var/run/dpdk/spdk_pid89228 00:32:05.531 Removing: /var/run/dpdk/spdk_pid89543 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90110 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90119 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90486 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90649 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90817 00:32:05.532 Removing: /var/run/dpdk/spdk_pid90928 00:32:05.532 Removing: /var/run/dpdk/spdk_pid91081 00:32:05.532 Removing: /var/run/dpdk/spdk_pid91201 00:32:05.790 Removing: /var/run/dpdk/spdk_pid91878 00:32:05.790 Removing: /var/run/dpdk/spdk_pid91913 00:32:05.790 Removing: /var/run/dpdk/spdk_pid91953 00:32:05.790 Removing: /var/run/dpdk/spdk_pid92214 00:32:05.790 Removing: /var/run/dpdk/spdk_pid92246 00:32:05.790 Removing: /var/run/dpdk/spdk_pid92280 00:32:05.790 Removing: /var/run/dpdk/spdk_pid92711 00:32:05.790 Removing: /var/run/dpdk/spdk_pid92746 00:32:05.790 Removing: /var/run/dpdk/spdk_pid93223 00:32:05.790 Clean 00:32:05.790 13:42:23 -- common/autotest_common.sh@1437 -- # return 0 00:32:05.790 13:42:23 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:32:05.790 13:42:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:05.790 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:32:05.790 13:42:23 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:32:05.790 13:42:23 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:32:05.790 13:42:23 -- common/autotest_common.sh@10 -- # set +x 00:32:05.790 13:42:23 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:05.790 13:42:23 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:05.790 13:42:23 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:05.790 13:42:23 -- spdk/autotest.sh@389 -- # hash lcov 00:32:05.790 13:42:23 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:05.790 13:42:23 -- spdk/autotest.sh@391 -- # hostname 00:32:05.790 13:42:23 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:06.047 geninfo: WARNING: invalid characters removed from testname! 00:32:32.586 13:42:48 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:35.878 13:42:52 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:38.412 13:42:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:40.945 13:42:58 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:43.512 13:43:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:46.793 13:43:03 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:49.322 13:43:06 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:49.581 13:43:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:32:49.581 13:43:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:49.581 13:43:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.581 13:43:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.582 13:43:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.582 13:43:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.582 13:43:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.582 13:43:06 -- paths/export.sh@5 -- $ export PATH 00:32:49.582 13:43:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.582 13:43:06 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:49.582 13:43:06 -- common/autobuild_common.sh@435 -- $ date +%s 00:32:49.582 13:43:06 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714138986.XXXXXX 00:32:49.582 13:43:06 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714138986.oJ0TZk 00:32:49.582 13:43:06 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:32:49.582 13:43:06 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:32:49.582 13:43:06 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:49.582 13:43:06 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:49.582 13:43:06 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:49.582 13:43:06 -- common/autobuild_common.sh@451 -- $ get_config_params 00:32:49.582 13:43:06 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:32:49.582 13:43:06 -- common/autotest_common.sh@10 -- $ set +x 00:32:49.582 13:43:06 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi 
--with-golang' 00:32:49.582 13:43:06 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:32:49.582 13:43:06 -- pm/common@17 -- $ local monitor 00:32:49.582 13:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.582 13:43:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94900 00:32:49.582 13:43:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.582 13:43:06 -- pm/common@21 -- $ date +%s 00:32:49.582 13:43:06 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94902 00:32:49.582 13:43:06 -- pm/common@26 -- $ sleep 1 00:32:49.582 13:43:06 -- pm/common@21 -- $ date +%s 00:32:49.582 13:43:06 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714138986 00:32:49.582 13:43:06 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714138986 00:32:49.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714138986_collect-vmstat.pm.log 00:32:49.582 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714138986_collect-cpu-load.pm.log 00:32:50.517 13:43:07 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:32:50.517 13:43:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:50.517 13:43:07 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:50.517 13:43:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:50.517 13:43:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:50.517 13:43:07 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:50.517 13:43:07 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:50.517 13:43:07 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:50.517 13:43:07 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:50.517 13:43:07 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:50.517 13:43:07 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:50.518 13:43:07 -- pm/common@30 -- $ signal_monitor_resources TERM 00:32:50.518 13:43:07 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:32:50.518 13:43:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:50.518 13:43:07 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:50.518 13:43:07 -- pm/common@45 -- $ pid=94909 00:32:50.518 13:43:07 -- pm/common@52 -- $ sudo kill -TERM 94909 00:32:50.776 13:43:07 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:50.776 13:43:07 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:50.776 13:43:07 -- pm/common@45 -- $ pid=94908 00:32:50.776 13:43:07 -- pm/common@52 -- $ sudo kill -TERM 94908 00:32:50.776 + [[ -n 5249 ]] 00:32:50.776 + sudo kill 5249 00:32:50.786 [Pipeline] } 00:32:50.805 [Pipeline] // timeout 00:32:50.811 [Pipeline] } 00:32:50.829 [Pipeline] // stage 00:32:50.835 [Pipeline] } 00:32:50.853 [Pipeline] // catchError 00:32:50.862 [Pipeline] stage 00:32:50.865 [Pipeline] { (Stop VM) 00:32:50.880 [Pipeline] sh 00:32:51.156 + vagrant halt 00:32:54.446 ==> default: Halting domain... 
00:33:01.072 [Pipeline] sh 00:33:01.350 + vagrant destroy -f 00:33:05.536 ==> default: Removing domain... 00:33:05.551 [Pipeline] sh 00:33:05.841 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:33:05.850 [Pipeline] } 00:33:05.870 [Pipeline] // stage 00:33:05.877 [Pipeline] } 00:33:05.896 [Pipeline] // dir 00:33:05.903 [Pipeline] } 00:33:05.920 [Pipeline] // wrap 00:33:05.928 [Pipeline] } 00:33:05.944 [Pipeline] // catchError 00:33:05.951 [Pipeline] stage 00:33:05.953 [Pipeline] { (Epilogue) 00:33:05.967 [Pipeline] sh 00:33:06.245 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:12.818 [Pipeline] catchError 00:33:12.820 [Pipeline] { 00:33:12.835 [Pipeline] sh 00:33:13.113 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:13.113 Artifacts sizes are good 00:33:13.123 [Pipeline] } 00:33:13.141 [Pipeline] // catchError 00:33:13.151 [Pipeline] archiveArtifacts 00:33:13.159 Archiving artifacts 00:33:13.335 [Pipeline] cleanWs 00:33:13.346 [WS-CLEANUP] Deleting project workspace... 00:33:13.346 [WS-CLEANUP] Deferred wipeout is used... 00:33:13.352 [WS-CLEANUP] done 00:33:13.354 [Pipeline] } 00:33:13.372 [Pipeline] // stage 00:33:13.377 [Pipeline] } 00:33:13.395 [Pipeline] // node 00:33:13.401 [Pipeline] End of Pipeline 00:33:13.435 Finished: SUCCESS